From ea3e4abc5e549465a9383f9933446ed0e59ea893 Mon Sep 17 00:00:00 2001 From: amyheather Date: Mon, 2 Mar 2026 11:45:11 +0000 Subject: [PATCH 01/30] feat(impact): hide python/r buttons (#192) --- pages/impact/impact.qmd | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/pages/impact/impact.qmd b/pages/impact/impact.qmd index 2127033..443a3a8 100644 --- a/pages/impact/impact.qmd +++ b/pages/impact/impact.qmd @@ -2,6 +2,13 @@ title: "Impact" --- + + + This page highlights how the DES RAP Book is being used and referenced by the community. Let us know if you use the DES RAP Book, as we'd love to include it here! ## Teaching and training use From 334282b1397bfb7b0f9dbb5054bf53097106a150 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 5 Mar 2026 16:19:11 +0000 Subject: [PATCH 02/30] feat(index/impact): add renal capacity model and quote from lucy --- index.qmd | 14 ++++++++++---- pages/impact/impact.qmd | 25 +++++++++++++------------ 2 files changed, 23 insertions(+), 16 deletions(-) diff --git a/index.qmd b/index.qmd index ffeb49c..7483dbe 100644 --- a/index.qmd +++ b/index.qmd @@ -27,13 +27,15 @@ format: } -This open book is a self-paced training resource that teaches you how to design, implement, and share discrete-event simulation (DES) models in Python and R as part of a reproducible analytical pipeline. It combines a step-by-step guide with complete example repositories that you can adapt for your own projects. +This open book is a self-paced training resource that teaches you how to design, implement, and share discrete-event simulation (DES) models in Python and R as part of a reproducible analytical pipeline. It combines a **step-by-step guide** with **complete example repositories** that you can adapt for your own projects. -The material is designed for researchers, research software engineers, analysts, and postgraduate students in health and operations research who want to build transparent, trustworthy simulation models. 
Educators and trainers can adopt the book as a complete syllabus for a short course or reuse individual chapters as standalone teaching units (for example, sessions on RAP, testing, packaging, or sharing models), and should cite the resource when they do so in line with the provided citation below. The accompanying code is released under an MIT licence, and the text is available under a CC BY-SA 4.0 licence, allowing reuse and adaptation with appropriate attribution. +The material is designed for analysts, researchers, and students in health and operations research who want to build transparent, trustworthy simulation models. To get the most from this resource, you should be comfortable with basic programming in either Python or R and have some familiarity with probability and basic statistics. **No prior DES experience is required**: short introductions to DES, reproducible analytical pipelines, and free and open source software are provided in the "Intros" section and linked below. -> If you use the DES RAP book in your teaching, research, or training, we'd love to hear about it. Please [get in touch](/pages/guide/further_info/feedback.qmd) to share how you've used the material, suggest improvements, or point us to example models or case studies that others might find helpful. +::: {.pale-blue} + +"The DES RAP Book was a valuable resource in our transition to open-source DES modelling. It provided us with a step-by-step structure to follow and guidance on best practice for writing code, tests and documentation for our package. Our model is being used in the Midlands, and beyond, for understanding future demand and capacity within Kidney Replacement Therapy." - **Lucy Morgan, Analytics Manager at The Strategy Unit**. -To get the most from this resource, you should be comfortable with basic programming in either Python or R and have some familiarity with probability and basic statistics.
No prior DES experience is required: short introductions to DES, reproducible analytical pipelines, and free and open source software are provided in the "Intros" section and linked below. +::: An engaged learner can complete the core step-by-step guide in around 10-15 hours, including time to run the code examples and attempt the exercises. You can work through the material in order as a structured course, or dip into specific sections (such as input modelling, verification and validation, or sharing and archiving) as needed. @@ -48,6 +50,10 @@ After working through this resource, you will be able to: * Improve **style, documentation, and automation** with linting, docstrings, and GitHub Actions. * **Collaborate and share** your work effectively, including code review, licensing, citation, changelogs, and archiving. +> If you use the DES RAP book in your teaching, research, or training, we'd love to hear about it. Please [get in touch](/pages/guide/further_info/feedback.qmd) to share how you've used the material, suggest improvements, or point us to example models or case studies that others might find helpful. +> +> Educators and trainers can adopt the book as a complete syllabus for a short course or reuse individual chapters as standalone teaching units (for example, sessions on RAP, testing, packaging, or sharing models), and should cite the resource when they do so in line with the provided citation below. The accompanying code is released under an MIT licence, and the text is available under a CC BY-SA 4.0 licence, allowing reuse and adaptation with appropriate attribution. +
```{=html} diff --git a/pages/impact/impact.qmd b/pages/impact/impact.qmd index 443a3a8..eb7dcaa 100644 --- a/pages/impact/impact.qmd +++ b/pages/impact/impact.qmd @@ -11,6 +11,19 @@ title: "Impact" This page highlights how the DES RAP Book is being used and referenced by the community. Let us know if you use the DES RAP Book, as we'd love to include it here! +## Research + +* **Renal capacity model** + Model developed by [YiWen Hon](https://github.com/yiwen-h) and [Lucy Morgan](https://github.com/morganle-48) at the Strategy Unit, in collaboration with the Midlands Renal Operational Delivery Network (MRODN). Uses SimPy to model Kidney Replacement Therapy (KRT) patient pathways for capacity planning. [Check out their model on GitHub](https://github.com/The-Strategy-Unit/renal-capacity-model/), also [archived on Zenodo](https://zenodo.org/doi/10.5281/zenodo.18599117), and read more about the project [in this article](https://www.strategyunitwm.nhs.uk/news/planning-rising-renal-demand-simulating-capacity-across-care-system). + +> "The DES RAP Book was a valuable resource in our transition to open-source DES modelling. It provided us with a step-by-step structure to follow and guidance on best practice for writing code, tests and documentation for our package. Our model is being used in the Midlands, and beyond, for understanding future demand and capacity within Kidney Replacement Therapy." - **Lucy Morgan, Analytics Manager at The Strategy Unit**. + +* **NHS model reuse project** + Collaboration between King’s, The Strategy Unit and the University of Exeter, where DES models are provided to NHS analysts. It explores whether analysts are able to reproduce and reuse the models in their own context, and the model sample consists of our [Python stroke example model](https://github.com/pythonhealthdatascience/pydesrap_stroke) and [R stroke example model](https://github.com/pythonhealthdatascience/rdesrap_stroke). 
+ +* **Stroke model of Same Day Emergency Care (SDEC) and CT Perfusion (CTP) scanning** + Hyperacute and acute stroke pathway model created by John Williams at Maidstone and Tunbridge Wells NHS Trust. Now being updated using DES RAP principles to add documentation, reproducible workflows, testing and other RAP infrastructure - [see updated model GitHub](https://github.com/Bergam0t/jw_hsma_des_stroke_project). + ## Teaching and training use The DES RAP Book is already being used and adapted in several teaching and training contexts. @@ -18,20 +31,8 @@ The DES RAP Book is already being used and adapted in several teaching and train * **NHS Somerset workshop (planned)** It will be used in a workshop with an NHS Somerset team, to support them in improving their simulation model. -* **Community workshop on reproducible simulation (planned)** - A wider workshop is being developed for the simulation community, using DES RAP as the main teaching resource. Participants will work through selected parts of the step-by-step guide and example repositories to explore how reproducible analytical pipelines can be applied to their own discrete-event simulation projects. - * **Coding for Reproducible Research (R environments)** Material from the DES RAP Book has been adapted for use in the R environments module of the [Coding for Reproducible Research](https://coding-for-reproducible-research.github.io/CfRR_Courses/short_courses/r_environments.html) Training Programme from the University of Exeter. * **HSMA "Little Book of DES"** DES RAP Book is referenced in several places in the [Little Book of DES](https://des.hsma.co.uk/) from the [Health Service Modelling Associates (HSMA) Programme](https://hsma.co.uk/) at the University of Exeter. - -## Research - -* **NHS model reuse project (planned)** - Collaboration between King’s, The Strategy Unit and the University of Exeter, where DES models are provided to NHS analysts. 
It explores whether they are able to reproduce and reuse them in their own context, and the model sample includes our [python stroke example model](https://github.com/pythonhealthdatascience/pydesrap_stroke). - -* **Stroke model of Same Day Emergency Care (SDEC) and CT Perfusion (CTP) scanning** - Hyperacute and acute stroke pathway model created by John Williams at Maidstone and Tunbridge Wells NHS Trust. Now being updated using DES RAP principles to add documentation, reproducible workflows, testing and other RAP infrastructure. - \ No newline at end of file From c244b023252dcd35bf998f8f9a908db0931658a1 Mon Sep 17 00:00:00 2001 From: amyheather Date: Wed, 11 Mar 2026 09:51:30 +0000 Subject: [PATCH 03/30] chore(vv): spelling mistake --- .../verification_validation_checklist.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pages/guide/verification_validation/verification_validation_resources/verification_validation_checklist.md b/pages/guide/verification_validation/verification_validation_resources/verification_validation_checklist.md index 6f04b83..95a7917 100644 --- a/pages/guide/verification_validation/verification_validation_resources/verification_validation_checklist.md +++ b/pages/guide/verification_validation/verification_validation_resources/verification_validation_checklist.md @@ -75,7 +75,7 @@ Predictive validation Animation visualisation -* [ ] Create an animation to help with validation (as well as communicaton and reuse). +* [ ] Create an animation to help with validation (as well as communication and reuse). 
Comparison testing From de36ab548e8abf12d0efbfee2b211218a6184fe6 Mon Sep 17 00:00:00 2001 From: amyheather Date: Wed, 11 Mar 2026 13:55:43 +0000 Subject: [PATCH 04/30] feat(vv): mention TRACE --- .../verification_validation.qmd | 4 ++++ .../references.bib | 19 +++++++++++++++++++ 2 files changed, 23 insertions(+) diff --git a/pages/guide/verification_validation/verification_validation.qmd b/pages/guide/verification_validation/verification_validation.qmd index 131b2f4..9520b9a 100644 --- a/pages/guide/verification_validation/verification_validation.qmd +++ b/pages/guide/verification_validation/verification_validation.qmd @@ -53,6 +53,8 @@ You'll come across many different categorisations for simulation validation tech The following sections outline various methods for verification and validation - but they are not comprehensive - just a selection we feel likely to be relevant and feasible for healthcare DES models. For more techniques, check out @Balci1998. +> **Note:** Verification and validation are often treated as part of a broader, structured process of establishing model quality and credibility across the whole model lifecycle. One example is the TRACE framework (TRAnsparent and Comprehensive Ecological modelling documentation) from ecological modelling, which was developed to plan, perform, and document "evaludation": the entire process of assessing a model's quality and credibility during development, analysis, and application (@Grimm2014). + ## Methods for verification
@@ -523,6 +525,8 @@ To help you get started, we've put together a simple Markdown checklist that you Use the checklist to track which methods you've tried, what you want to tackle next, and which seem most relevant or feasible for your current context. +> **Note:** Frameworks like TRACE recommend maintaining a modelling notebook and using standardised headings (for example, for model purpose, design choices, testing, and sensitivity analysis) so that you can later assemble a transparent record of how the model was developed and checked. You can treat the activities on this page in the same way: for each verification or validation method you apply (such as stress testing, comparison testing, or face validation), record what you did, why you did it, and what you concluded (@Grimm2014). This Markdown checklist can double as a lightweight TRACE-style log for healthcare DES models: you can tick off methods you have used, and briefly note what you did and what you learned for each item. + {{< downloadthis verification_validation_resources/verification_validation_checklist.md dname="verification_validation_checklist" label="Download the verification and validation checklist" type="primary" >}} ```{.markdown} diff --git a/pages/guide/verification_validation/verification_validation_resources/references.bib b/pages/guide/verification_validation/verification_validation_resources/references.bib index 3ed6467..dfcb7a1 100644 --- a/pages/guide/verification_validation/verification_validation_resources/references.bib +++ b/pages/guide/verification_validation/verification_validation_resources/references.bib @@ -84,4 +84,23 @@ @online{Rosser2025 month = April, url = {https://hsma-programme.github.io/h6_april_2025_forum_presentation/}, note = {Accessed: 2025-05-10} +} + +@article{Grimm2014, + series = {Population {Models} for {Ecological} {Risk} {Assessment} of {Chemicals}}, + title = {Towards better modelling and decision support: {Documenting} model development, testing, and 
analysis using {TRACE}}, + volume = {280}, + issn = {0304-3800}, + shorttitle = {Towards better modelling and decision support}, + url = {https://www.sciencedirect.com/science/article/pii/S0304380014000611}, + doi = {10.1016/j.ecolmodel.2014.01.018}, + abstract = {The potential of ecological models for supporting environmental decision making is increasingly acknowledged. However, it often remains unclear whether a model is realistic and reliable enough. Good practice for developing and testing ecological models has not yet been established. Therefore, TRACE, a general framework for documenting a model's rationale, design, and testing was recently suggested. Originally TRACE was aimed at documenting good modelling practice. However, the word ‘documentation’ does not convey TRACE's urgency. Therefore, we re-define TRACE as a tool for planning, performing, and documenting good modelling practice. TRACE documents should provide convincing evidence that a model was thoughtfully designed, correctly implemented, thoroughly tested, well understood, and appropriately used for its intended purpose. TRACE documents link the science underlying a model to its application, thereby also linking modellers and model users, for example stakeholders, decision makers, and developers of policies. We report on first experiences in producing TRACE documents. We found that the original idea underlying TRACE was valid, but to make its use more coherent and efficient, an update of its structure and more specific guidance for its use are needed. The updated TRACE format follows the recently developed framework of model ‘evaludation’: the entire process of establishing model quality and credibility throughout all stages of model development, analysis, and application. TRACE thus becomes a tool for planning, documenting, and assessing model evaludation, which includes understanding the rationale behind a model and its envisaged use. 
We introduce the new structure and revised terminology of TRACE and provide examples.}, + urldate = {2025-06-11}, + journal = {Ecological Modelling}, + author = {Grimm, Volker and Augusiak, Jacqueline and Focks, Andreas and Frank, Béatrice M. and Gabsi, Faten and Johnston, Alice S. A. and Liu, Chun and Martin, Benjamin T. and Meli, Mattia and Radchuk, Viktoriia and Thorbek, Pernille and Railsback, Steven F.}, + month = may, + year = {2014}, + keywords = {Risk assessment, Decision support, Good modelling practice, Standardization}, + pages = {129--139}, + file = {ScienceDirect Snapshot:/home/amy/Zotero/storage/E4PNR9TL/S0304380014000611.html:text/html}, } \ No newline at end of file From ae2d82409ac9f2a97c6101d3f359138062020ca7 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 10:21:11 +0000 Subject: [PATCH 05/30] feat(version): add mcq (#145) --- pages/guide/setup/version.qmd | 87 +++++++++++++++++++++++++++++++++-- 1 file changed, 82 insertions(+), 5 deletions(-) diff --git a/pages/guide/setup/version.qmd b/pages/guide/setup/version.qmd index fe9da31..63d2464 100644 --- a/pages/guide/setup/version.qmd +++ b/pages/guide/setup/version.qmd @@ -619,11 +619,90 @@ It will ask for a name and contact email. You can typically then select that the ## Test yourself -If you haven't already, **have a go at setting up a GitHub repository** to use when working through this book. +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +:::{.callout-note} + +## What does cloning a repository from GitHub allow you to do? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "View the repository in your browser, but not edit any files locally.", + "Copy the repository to a new GitHub account without keeping any history.", + answer = paste0( + "Create a local copy of the remote repository on your computer so you can ", + "edit files and sync changes between local and remote." + ) +))) +``` + +::: + +:::{.callout-note} + +## You have edited `README.md` and want those changes included in your next commit. Which command should you run first? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "`git status README.md`", + answer = "`git add README.md`", + "`git clone README.md`" +))) +``` + +::: -We will provide examples in both Python and R - following along in your own repository helps build understanding and practial skills. +:::{.callout-note} -### Steps to practice +## You are working alone on a project and want to try a risky change without breaking your main code. What is the best Git approach? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Clone the repository again into a new folder and work only in that copy.", + answer = paste0( + "Create a new branch, make changes there, and merge back ", + "when you're happy." + ), + "Edit files directly on the main branch and hope you can undo if needed." +))) +``` + +::: + +:::{.callout-note} + +## What is a key advantage of creating a GitHub organisation for a team project? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "It provides shared ownership and continuity so multiple people can ", + "manage and access related repositories under one organisational account." + ), + "It hides all repositories from collaborators so only you can access them." 
+))) +``` + +::: + +### Activity + +If you haven't already, **have a go at setting up a GitHub repository** to use when working through this book. We will provide examples in both Python and R, and following along in your own repository will help build understanding and practical skills. You should try to: ::: {.python-content} 1. **Create a repository** (e.g. `des-rap-python`) with `README.md`, `.gitignore` and `LICENSE`. @@ -642,8 +721,6 @@ We will provide examples in both Python and R - following along in your own repo 6. **Delete the branch**. -
- ::: {.callout-tip} If you plan to explore both languages, make separate repositories for each, as their project structures and workflows differ. From c58fa7e9ed9778a8c9b6c97dbb8c36e2b8f75f00 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 10:39:37 +0000 Subject: [PATCH 06/30] feat(environment): add mcq (#145) --- pages/guide/setup/environment.qmd | 146 ++++++++++++++++++++++++++++++ 1 file changed, 146 insertions(+) diff --git a/pages/guide/setup/environment.qmd b/pages/guide/setup/environment.qmd index fc71d8f..6d9f4f3 100644 --- a/pages/guide/setup/environment.qmd +++ b/pages/guide/setup/environment.qmd @@ -1628,4 +1628,150 @@ We also describe how to **recreate and troubleshoot environments**: identify or ::: +## Test yourself + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why should you create an isolated environment for each project? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + paste0( + "So the code will keep working even if you uninstall Python or R from ", + "your computer." + ), + "So the project will only ever work on the exact same physical computer.", + answer = paste0( + "So you can recreate the same versions of the language and packages ", + "later, ensuring results can be reproduced." + ) +))) +``` + +::: + +:::: {.python-content} + +::: {.callout-note} + +## When first creating a conda environment, what do we recommend putting in your `environment.yaml`? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "A full list of every possible package you might ever use in the future.", + answer = paste0( + "Just the environment name, channels (e.g. `conda-forge`), and the ", + "Python version you want to use." + ), + paste0( + "Only your simulation model package and no information about Python or ", + "channels." 
+ ) +))) +``` + +::: + +::: {.callout-note} + +## After building a Python environment and installing the packages you want, what is the recommended next step for reproducibility? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + paste0( + "Delete `environment.yaml` because the environment now lives on your ", + "machine." + ), + answer = paste0( + "Update `environment.yaml` to pin the exact package versions you are ", + "using, for example `python=3.13.3` and `simpy=4.1.1`." + ), + "Run `conda env export` and ignore any manually written files." +))) +``` + +::: + +:::: + +:::: {.r-content} + +::: {.callout-note} + +## What is the primary role of `rig`? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "To install, list, and switch between different R versions on your ", + "machine (e.g. `rig add 4.1.2`, `rig default 4.1.2`)." + ), + "To manage project‑specific package libraries and lockfiles.", + "To build R packages and upload them to CRAN." +))) +``` + +::: + +::: {.callout-note} + +## On this page, why is a `DESCRIPTION` file recommended alongside `renv.lock` for R projects? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Because `renv.lock` is optional and never used during restoration.", + answer = paste0( + "Because `DESCRIPTION` clearly lists the main packages your project ", + "depends on, complementing `renv.lock` which records all packages ", + "and exact versions." + ), + paste0( + "Because `DESCRIPTION` automatically replaces the need for ", + "`renv::snapshot()`." + ) +))) +``` + +::: + +:::: + +::: {.callout-note} + +## What is a sensible first step when you want to recreate someone else's environment? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + paste0( + "Install whatever latest language and package versions you prefer and ", + "ignore project files." 
+ ), + "Manually edit all scripts to remove any package dependencies.", + answer = paste0( + "Look for existing environment files (e.g. `environment.yaml`, ", + "`renv.lock`, `DESCRIPTION`) and use them, or construct one based on ", + "the documented versions and packages." + ) +))) +``` + +::: +

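The environment quiz added in the patch above refers to pinning exact versions in `environment.yaml`. As a companion sketch (not part of any patch), a minimal pinned file along the lines the quiz answers describe might look like the following; the environment name is illustrative, while the channel and version pins (`conda-forge`, `python=3.13.3`, `simpy=4.1.1`) are the ones quoted in the quiz options:

```yaml
# Minimal conda environment sketch: name, channel, and pinned versions.
# The name "des-example" is illustrative; versions are those quoted in the quiz.
name: des-example
channels:
  - conda-forge
dependencies:
  - python=3.13.3
  - simpy=4.1.1
```

Recreating the environment from this file (e.g. `conda env create -f environment.yaml`) is what makes the pinned versions reproducible on another machine.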
From bd36bb351dc8cbe49eb92fc4700ec515b7bec6d9 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 10:42:27 +0000 Subject: [PATCH 07/30] feat(package): add mcq (#145) --- pages/guide/setup/package.qmd | 109 ++++++++++++++++++++++++++++++++++ 1 file changed, 109 insertions(+) diff --git a/pages/guide/setup/package.qmd b/pages/guide/setup/package.qmd index c28151d..1b81755 100644 --- a/pages/guide/setup/package.qmd +++ b/pages/guide/setup/package.qmd @@ -1165,6 +1165,115 @@ Click the  [Code]{style="background: #597341; color: #fff; border-radius:4p If you are following along with this book and using this template, you'll want to update it with the provided environment, so you have all the packages needed. +## Test yourself + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why do we recommend structuring your simulation code as a package? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Because a package structure makes your model easier to reuse, test, and ", + "separate from analysis, aligning with RAP guidance." + ), + paste0( + "So you can recreate the same versions of the language and packages ", + "later, ensuring results can be reproduced." + ) +))) +``` + +::: + +:::: {.python-content} + +::: {.callout-note} + +## What is the purpose of the `__init__.py` file in the Python `simulation/` package? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To store environment variables for your operating system.", + answer = paste0( + "It tells Python that `simulation/` is a package and can expose things ", + "like `__version__` for tools such as flit." + ), + "To automatically download any missing packages from the internet." +))) +``` + +::: + +::: {.callout-note} + +## When testing the Python package in a Jupyter notebook, why does the tutorial use `%load_ext autoreload` and `%autoreload 2`? 
+ +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To automatically commit changes to GitHub every time a cell runs.", + "To disable importing any local modules while the notebook is open.", + answer = paste0( + "So that when you save changes to files like `model.py`, Jupyter ", + "reloads them without restarting the kernel." + ) +))) +``` + +::: + +:::: + +:::: {.r-content} + +::: {.callout-note} + +## In the R example, which folder is used to store the core model functions when structuring as a package? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "`models/`", + answer = "`R/`", + "`man/`" +))) +``` + +::: + +::: {.callout-note} + +## Why is a package‑level documentation file (created with `usethis::use_package_doc()`) helpful in R? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "It lets you collect all `@importFrom` tags in one place, keeping ", + "code in `R/` less cluttered and imports easier to review." + ), + "It automatically writes all your modelling functions for you.", + "It replaces the need for a `DESCRIPTION` file entirely." +))) +``` + +::: + +:::: + ## Further information :::{.python-content} From d99e2ed93ef1ca25858fcc9559449373923f3729 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 11:38:05 +0000 Subject: [PATCH 08/30] feat(code_structure): add mcq (#145) --- pages/guide/setup/code_structure.qmd | 98 +++++++++++++++++++++++++++- 1 file changed, 97 insertions(+), 1 deletion(-) diff --git a/pages/guide/setup/code_structure.qmd b/pages/guide/setup/code_structure.qmd index 7885339..e01b1e2 100644 --- a/pages/guide/setup/code_structure.qmd +++ b/pages/guide/setup/code_structure.qmd @@ -341,6 +341,102 @@ Normally, a mix of programming paradigms will be used. ## Test yourself +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +:::: {.python-content} + +::: {.callout-note} + +## Why do we recommend using functions and classes, instead of copying and pasting code? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Because small, reusable functions/classes make code more modular, ", + "easier to read, test, and maintain, and avoid duplication." + ), + paste0( + "Because as long as logic is in functions or classes, it doesn't matter ", + "if there is a lot of duplicated code." + ) +))) +``` + +::: + +::: {.callout-note} + +## In the Python `Patient` class, what is the purpose of the `__init__` method? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To automatically run every hour and update all patients in the system.", + answer = paste0( + "It initialises a new `Patient` object by setting attributes such as ", + "`patient_id`, `arrival_time`, and the initial `status`." + ), + "To delete the patient from the queue when they arrive at the hospital." +))) +``` + +::: + +:::: + +:::: {.r-content} + +::: {.callout-note} + +## Why do we recommend using functions, instead of copying and pasting code? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Because small, reusable functions make code more modular, ", + "easier to read, test, and maintain, and avoid duplication." + ), + paste0( + "Because as long as logic is in functions, it doesn't matter ", + "if there is a lot of duplicated code." + ) +))) +``` + +::: + +::: {.callout-note} + +## In the R `estimate_wait_time` example, what does the function return? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "A list containing duration, status, and a vector of results.", + answer = "A single numeric value: the estimated total wait time.", + "A data frame with one row per patient and their individual wait time." 
+))) +``` + +::: + +:::: + +### Activity + ::: {.python-content} **Task: Refactor the provided script into a function or class and put it on GitHub**. To do this, you should: @@ -468,7 +564,7 @@ print(results) :::: -:::: {.callout-tip title="Solutions" collapse="true"} +:::: {.callout-tip title="Click to view solutions" collapse="true"} ::: {.python-content} From 84e0324f30b04ffe5ca56ba6d508bdf18177c819 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 11:54:21 +0000 Subject: [PATCH 09/30] feat(input_modelling): add mcq (#145) --- pages/guide/inputs/input_modelling.qmd | 90 ++++++++++++++++++++++++++ 1 file changed, 90 insertions(+) diff --git a/pages/guide/inputs/input_modelling.qmd b/pages/guide/inputs/input_modelling.qmd index 86b47ea..7307cb2 100644 --- a/pages/guide/inputs/input_modelling.qmd +++ b/pages/guide/inputs/input_modelling.qmd @@ -1289,6 +1289,96 @@ A common challenge in healthcare modelling is having insufficient data or no sui ## Test yourself +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## How is randomness typically introduced into a DES model? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "By choosing a single fixed time for each event and reusing it.", + answer = paste0( + "By randomly sampling event times (e.g. arrivals, service durations) ", + "from probability distributions." + ), + "By manually typing in new event times before each simulation run." +))) +``` + +::: + +::: {.callout-note} + +## In the arrivals example, what is the usual relationship between the Poisson and exponential distributions? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + paste0( + "They are unrelated; Poisson is only used for costs and exponential only ", + "for ages." + ), + answer = paste0( + "If arrivals follow a Poisson process with a constant rate, the ", + "inter‑arrival times follow an exponential distribution with the same rate." + ), + "They both always produce symmetric bell‑shaped distributions." +))) +``` + +::: + +::: {.callout-note} + +## What is the main purpose of plotting a histogram of inter‑arrival or service times in this workflow? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To check whether the data file was saved in CSV or Excel format.", + answer = paste0( + "To inspect the shape and support of the data (e.g. right‑skewed, ", + "non‑negative) and identify plausible candidate distributions." + ), + "To verify that the mean is exactly equal to the median." +))) +``` + +::: + +::: {.callout-note} + +## When data are insufficient (e.g. sparse histograms or only summary statistics), which strategy is recommended in the chapter? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Use simpler standard distributions (e.g. exponential or triangular), ", + "draw on expert judgement, and explore sensitivity to plausible ranges." 
+ ), + "Always switch to the most complex multi‑parameter distribution available.", + "Abandon modelling and instead assume all times are fixed at the mean." +))) +``` + +::: + +### Activity + If you haven't already followed along, **now's the time to put everything from this page into practice!** **Task:** From 6d2668f0036ac1b280a5334d6c708869b5278048 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 11:55:14 +0000 Subject: [PATCH 10/30] feat(parameters_file): add mcq (#145) --- pages/guide/inputs/parameters_file.qmd | 72 ++++++++++++++++++++++++++ 1 file changed, 72 insertions(+) diff --git a/pages/guide/inputs/parameters_file.qmd b/pages/guide/inputs/parameters_file.qmd index e6d37b1..93f1ea9 100644 --- a/pages/guide/inputs/parameters_file.qmd +++ b/pages/guide/inputs/parameters_file.qmd @@ -892,6 +892,78 @@ The nurse visit model doesn't import parameters from a file. It instead stores t ## Test yourself +
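As a concrete illustration of the file-based approach, here is a minimal sketch that round-trips parameters through JSON (the parameter names are hypothetical, not the book's):

```python
import json
import tempfile
from pathlib import Path

# Reading parameters from a file keeps hard-coded values out of the
# analysis script: swapping parameter sets (e.g. real vs synthetic) then
# just means pointing at a different file.
with tempfile.TemporaryDirectory() as tmp:
    params_path = Path(tmp) / "parameters.json"
    params_path.write_text(json.dumps({
        "patient_inter": 4.0,     # mean inter-arrival time (minutes)
        "number_of_nurses": 5,
        "sim_duration": 600,
    }))
    params = json.loads(params_path.read_text())
```

A CSV would serve equally well for flat, tidy-table parameters; JSON suits nested structures.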
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why might you choose to store simulation parameters in an external file rather than directly in your script? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Because it avoids hard‑coded values, makes it easier to share different ", + "parameter sets (e.g. real vs synthetic), and lets non‑programmers edit ", + "parameters without touching the code." + ), + paste0( + "Because parameters in a script cannot be version controlled, whereas ", + "files like CSV or JSON can." + ) +))) +``` + +::: + +::: {.callout-note} + +## When is CSV usually preferred over JSON for parameter files? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "When parameters fit naturally into a flat table of rows and columns ", + "following tidy data principles." + ), + paste0( + "When parameters contain deeply nested structures and many attributes ", + "per item that must mirror nested code objects." + ) +))) +``` + +::: + +::: {.callout-note} + +## What is the main purpose of a data dictionary for parameter files in this workflow? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To store raw simulation outputs alongside parameters.", + answer = paste0( + "To document each parameter’s meaning, units, and any codes or ", + "abbreviations so others can correctly interpret and modify them." + ) +))) +``` + +::: + +### Activity + If you haven't already, now's the time to practice working with **external parameter files**. 
**Task:** From 9c2cad7b1521eaab7e944c0b4846510e240e7e41 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 11:55:42 +0000 Subject: [PATCH 11/30] chore(parameters_script): add activity heading --- pages/guide/inputs/parameters_script.qmd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/pages/guide/inputs/parameters_script.qmd b/pages/guide/inputs/parameters_script.qmd index 888b374..2869876 100644 --- a/pages/guide/inputs/parameters_script.qmd +++ b/pages/guide/inputs/parameters_script.qmd @@ -972,6 +972,10 @@ The stroke model doesn't store parameters in a script. It instead imports them f ## Test yourself +
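One way to keep script-based parameters tidy is a small dataclass. This is an illustrative sketch with made-up parameter names, not the book's exact class:

```python
from dataclasses import dataclass, replace

# All defaults live in one place; a scenario overrides only what it needs,
# and frozen=True stops accidental mutation mid-run.
@dataclass(frozen=True)
class Parameters:
    patient_inter: float = 4.0   # mean inter-arrival time (minutes)
    number_of_nurses: int = 5
    sim_duration: int = 600

base = Parameters()
scenario = replace(base, number_of_nurses=7)  # a fresh, tweaked copy
```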
+ +### Quiz + ```{r} #| echo: false library(webexercises) # nolint: library_call_linter @@ -1012,7 +1016,7 @@ cat(longmcq(c( ::: -
+### Activity If you haven't already, let's have a go at **managing parameters inside scripts**. From 313f981d61cb511db797ab3a21517fab59e071ff Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 12:00:35 +0000 Subject: [PATCH 12/30] feat(param_validation): add mcq (#145) --- pages/guide/inputs/parameters_validation.qmd | 33 ++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/pages/guide/inputs/parameters_validation.qmd b/pages/guide/inputs/parameters_validation.qmd index af0cd3d..47ab5f1 100644 --- a/pages/guide/inputs/parameters_validation.qmd +++ b/pages/guide/inputs/parameters_validation.qmd @@ -1096,6 +1096,39 @@ except ValueError as e: # << ## Test yourself +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why is parameter validation especially important in discrete‑event simulation models? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + paste0( + "Because simulations will always crash immediately if a parameter name ", + "is misspelled." + ), + answer = paste0( + "Because small typos or out‑of‑range values can silently produce ", + "plausible‑looking but incorrect simulation results." + ), + "Because parameter validation automatically makes the simulation run faster." +))) +``` + +::: + +### Activity + If you haven't already, now's the time to try out **parameter validation** in practice. **Task:** From be65ec33cdad63cb3d468bb64b22210d1af9f638 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 12:09:39 +0000 Subject: [PATCH 13/30] feat(patients): add mcq (#145) --- pages/guide/model/patients.qmd | 98 ++++++++++++++++++++++++++++++++++ 1 file changed, 98 insertions(+) diff --git a/pages/guide/model/patients.qmd b/pages/guide/model/patients.qmd index 61d280e..66b1a54 100644 --- a/pages/guide/model/patients.qmd +++ b/pages/guide/model/patients.qmd @@ -1199,6 +1199,104 @@ model(param = param, run_number = 1L) ## Test yourself +
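Conceptually, an arrivals generator just alternates between sampling an inter-arrival gap and recording a patient. This stdlib sketch mirrors that logic without the SimPy machinery (names are illustrative):

```python
import random

def generate_arrivals(interarrival_mean, run_length, seed=0):
    """Sample exponential gaps, advance the clock, and record patients."""
    rng = random.Random(seed)
    clock, patients = 0.0, []
    while True:
        # expovariate takes a rate, i.e. 1 / mean inter-arrival time.
        clock += rng.expovariate(1.0 / interarrival_mean)
        if clock >= run_length:
            break
        patients.append({"id": len(patients) + 1, "arrival_time": clock})
    return patients

patients = generate_arrivals(interarrival_mean=4.0, run_length=100.0)
```

In the SimPy version, the `yield env.timeout(...)` call plays the role of advancing the clock.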
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +:::: {.python-content} + +::: {.callout-note} + +## In our model, what is the role of the `Parameters` class? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To define the SimPy environment and patient processes.", + answer = paste0( + "To store core simulation settings such as `interarrival_time`, ", + "`run_length`, and `verbose` in one place." + ), + "To generate random exponential inter‑arrival times directly." +))) +``` + +::: + +::: {.callout-note} + +## In the `Model` class, what does `generate_arrivals()` do? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "It computes summary statistics after the simulation has finished.", + "It deletes all patients from the system at the end of the run.", + answer = paste0( + "It repeatedly samples an inter‑arrival time, waits that long in the ", + "SimPy environment, creates a `Patient`, and stores it." + ) +))) +``` + +::: + +:::: + +:::: {.r-content} + +::: {.callout-note} + +## In our example, what does the `model()` function's `add_generator()` call do? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "It creates a summary report of arrivals after the simulation.", + answer = paste0( + "It defines a patient arrival process that follows a trajectory and ", + "uses `rexp()` to sample exponential inter‑arrival times." + ), + "It removes all existing trajectories from the simmer environment." +))) +``` + +::: + +:::: + +::: {.callout-note} + +## What problem does our distribution registry help solve? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "It centralises the configuration and creation of multiple distributions ", + "so you do not have to hard‑code each sampler in the model logic." 
+ ), + "It replaces the need for any parameters or configuration files.", + paste0( + "It ensures all distributions use the same seed and produce identical ", + "samples." + ) +))) +``` + +::: + +### Activity + Have a go at setting up a basic simulation model with entity generation, while following good package organisation. **Task:** From 1530d1343d675a0505d52ad814f46fc0f9742fcc Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 12:10:10 +0000 Subject: [PATCH 14/30] chore(distributions): add quiz/activity headings --- pages/guide/model/distributions.qmd | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/pages/guide/model/distributions.qmd b/pages/guide/model/distributions.qmd index 4e36e60..bb31a6b 100644 --- a/pages/guide/model/distributions.qmd +++ b/pages/guide/model/distributions.qmd @@ -436,6 +436,10 @@ On [the next page](patients.qmd), we'll learn how to add sampling to your model ## Test yourself +
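The registry idea can be sketched in a few lines of stdlib Python (the book's implementation differs in detail): configuration entries map to samplers, so model code never hard-codes a particular distribution.

```python
import random

def make_sampler(spec, seed=0):
    """Return a no-argument sampler chosen and parameterised by a config dict."""
    rng = random.Random(seed)
    registry = {
        "exponential": lambda: rng.expovariate(1.0 / spec["mean"]),
        "uniform": lambda: rng.uniform(spec["low"], spec["high"]),
    }
    return registry[spec["dist"]]

# Swapping a distribution is now a configuration change, not a code change.
sample_los = make_sampler({"dist": "exponential", "mean": 5.0})
sample_triage = make_sampler({"dist": "uniform", "low": 2.0, "high": 4.0})
```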
+ +### Quiz + ```{r} #| echo: false library(webexercises) # nolint: library_call_linter @@ -554,7 +558,7 @@ cat(longmcq(c( :::: -
+### Activity If you haven't already, try out **random sampling** for yourself! From 37bd03127f11ef9e48dd6b9606c0f761f78b21a2 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 12:23:37 +0000 Subject: [PATCH 15/30] feat(process): add mcq (#145) --- pages/guide/model/process.qmd | 94 +++++++++++++++++++++++++++++++++++ 1 file changed, 94 insertions(+) diff --git a/pages/guide/model/process.qmd b/pages/guide/model/process.qmd index 9f618d0..621513c 100644 --- a/pages/guide/model/process.qmd +++ b/pages/guide/model/process.qmd @@ -606,6 +606,100 @@ You can create custom logs which are easier to interpret - see the [Logging](log ## Test yourself +
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+:::: {.python-content}
+
+::: {.callout-note}
+
+## In the Python `consultation()` method, what is the effect of `with self.doctor.request() as req: yield req`?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  answer = paste0(
+    "The patient requests a doctor; if none are available they wait in a ",
+    "queue until a doctor becomes free."
+  ),
+  "It forces every patient to skip the doctor and leave immediately.",
+  "It randomly changes the number of available doctors during the run."
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## How is consultation duration modelled?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "As a fixed constant equal to `consultation_time` for every patient.",
+  answer = paste0(
+    "As a random sample from an exponential distribution with mean ",
+    "`param.consultation_time`."
+  ),
+  "As a uniform draw between 0 and `run_length` regardless of parameters."
+)))
+```
+
+:::
+
+::::
+
+:::: {.r-content}
+
+::: {.callout-note}
+
+## What is the role of `add_resource("doctor", param[["number_of_doctors"]])` in the model?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "It creates a logging object that prints all arrivals to the console.",
+  answer = paste0(
+    "It defines a doctor resource in the simmer environment with capacity ",
+    "equal to the specified number of doctors."
+  ),
+  "It sets the maximum number of patients allowed in the system."
+)))
+```
+
+:::
+
+::::
+
+::: {.callout-note}
+
+## What happens when all doctors are busy and a new patient arrives?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "The patient is discarded and never served.",
+  "The simulation automatically adds a new temporary doctor for that patient.",
+  answer = (
+    "The patient joins a queue and waits until a doctor becomes available."
+ ) +))) +``` + +::: + +### Activity + If you haven't already, try adding a resource-based process for your model. **Task:** From 0a1d22fda457c6bac413fda9c1d6ae61b19ca394 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 13:43:19 +0000 Subject: [PATCH 16/30] feat(full_run): add mcq (#145) --- pages/guide/experiments/full_run.qmd | 70 ++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/pages/guide/experiments/full_run.qmd b/pages/guide/experiments/full_run.qmd index 8879495..dadaf54 100644 --- a/pages/guide/experiments/full_run.qmd +++ b/pages/guide/experiments/full_run.qmd @@ -507,6 +507,76 @@ Both have a file `run_rmarkdown.sh` (like the example above) which is used to ru ## Test yourself +
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## Why is being able to run the entire analysis from a single command considered important in a RAP workflow?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  answer = paste0(
+    "Because it reduces manual steps, avoids inconsistencies between runs, ",
+    "and makes the full workflow repeatable by you and others."
+  ),
+  "Because it guarantees that the model will always run faster.",
+  "Because it allows you to skip documenting intermediate steps."
+)))
+```
+
+:::
+
+:::: {.python-content}
+
+::: {.callout-note}
+
+## What is the main role of `run_notebooks.sh`?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "To run all Jupyter notebooks in a random order.",
+  "To run all Jupyter notebooks in parallel.",
+  answer = "To run all Jupyter notebooks in sequence."
+)))
+```
+
+:::
+
+::::
+
+:::: {.r-content}
+
+::: {.callout-note}
+
+## What is the main role of `run_rmarkdown.sh`?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "To run all `.Rmd` files in a random order.",
+  "To run all `.Rmd` files in parallel.",
+  answer = "To run all `.Rmd` files in sequence."
+)))
+```
+
+:::
+
+::::
+
+### Activity
+
 Try writing a bash script that runs some of your analysis files. You can print a message before running each one and check whether each command succeeded.
From 764e405b26fa1e06e65899229100a6fe20a2805e Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 13:48:10 +0000 Subject: [PATCH 17/30] feat(scenarios): add mcq (#145) --- pages/guide/experiments/scenarios.qmd | 50 +++++++++++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/pages/guide/experiments/scenarios.qmd b/pages/guide/experiments/scenarios.qmd index a1217c6..afdaf78 100644 --- a/pages/guide/experiments/scenarios.qmd +++ b/pages/guide/experiments/scenarios.qmd @@ -590,6 +590,56 @@ However, there are two key things to keep in mind: ## Test yourself +
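In the same spirit, a runner can be sketched in Python with `subprocess` (the commands below are trivial stand-ins for notebook or script runs):

```python
import subprocess
import sys

# Run each step in sequence, announcing it first and stopping on the first
# failure - the same pattern as a bash runner like run_notebooks.sh.
commands = [
    [sys.executable, "-c", "print('step 1 ok')"],
    [sys.executable, "-c", "print('step 2 ok')"],
]

return_codes = []
for cmd in commands:
    print("Running:", cmd[-1])
    completed = subprocess.run(cmd, capture_output=True, text=True)
    return_codes.append(completed.returncode)
    if completed.returncode != 0:
        break  # stop at the first failure
```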
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why should code be shared for every scenario, not just the base case? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Because published descriptions are often ambiguous, and without code ", + "for each scenario it can be hard or impossible to reproduce the results." + ), + "Because reviewers only care about the scenario code and not the base case.", + "Because scenario code runs faster than base‑case code." +))) +``` + +::: + +::: {.callout-note} + +## Conceptually, how does scenario analysis differ from sensitivity analysis? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Scenario analysis studies predefined, plausible configurations (often ", + "changing several parameters), while sensitivity analysis varies one or ", + "a small set of parameters to see how input uncertainty affects outputs." + ), + "Scenario analysis only changes one parameter; sensitivity changes many.", + "Scenario analysis is exploratory; sensitivity is only about performance." +))) +``` + +::: + +### Activity + Try running your own scenario and sensitivity analyses. You could build your own simple loop to test different values - just make sure you use a fresh set of parameters for each run. 
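A sketch of such a loop (the "model" here is a stand-in calculation and the parameter names are made up): note that each scenario copies the base parameters rather than mutating them.

```python
base_params = {"patient_inter": 4.0, "consult_mean": 15.0, "number_of_nurses": 5}

def run_model(params):
    # Stand-in for a real simulation run: offered load per nurse.
    return params["consult_mean"] / (params["patient_inter"] * params["number_of_nurses"])

scenarios = {
    "base": {},
    "extra_nurse": {"number_of_nurses": 6},
    "faster_arrivals": {"patient_inter": 3.0},
}

results = {}
for name, overrides in scenarios.items():
    params = {**base_params, **overrides}  # fresh parameters per scenario
    results[name] = run_model(params)
```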
From 229f38e9aa7968a9587a0a0e6da10e5deae9837d Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 13:50:08 +0000 Subject: [PATCH 18/30] feat(tables_figures): add mcq (#145) --- pages/guide/experiments/tables_figures.qmd | 33 ++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/pages/guide/experiments/tables_figures.qmd b/pages/guide/experiments/tables_figures.qmd index 663cf86..3c4965f 100644 --- a/pages/guide/experiments/tables_figures.qmd +++ b/pages/guide/experiments/tables_figures.qmd @@ -1027,6 +1027,39 @@ In this figure, the blue line shows the result (e.g., mean wait time) as we incr ## Test yourself +
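As a minimal illustration of "tables generated by code", this sketch builds a markdown summary table from made-up replication results (stdlib only; in practice pandas or gt would do this job):

```python
from statistics import mean, stdev

# Made-up wait times per scenario, three replications each.
wait_times = {"base": [4.1, 5.3, 4.8], "extra_nurse": [2.0, 2.6, 2.3]}

# Build the table programmatically so the report can always be regenerated.
lines = ["| Scenario | Mean wait | SD |", "| --- | --- | --- |"]
for name, waits in wait_times.items():
    lines.append(f"| {name} | {mean(waits):.2f} | {stdev(waits):.2f} |")
table = "\n".join(lines)
print(table)
```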
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## What is the key message of this page regarding tables and figures in DES studies? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "That visual outputs are optional and not part of reproducibility.", + answer = paste0( + "That tables and figures should be generated by code, and that code must ", + "be shared so others can recreate the reported results." + ), + paste0( + "That only raw simulation logs need to be archived; plots can be ", + "drawn by hand." + ) +))) +``` + +::: + +### Activity + Create at least one table and figure from your simulation results. If you don't have any to hand, feel free to download ours: From 0bfe519382e16df671b0966bae4ec50eb670d501 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 13:52:25 +0000 Subject: [PATCH 19/30] feat(logs): add mcq (#145) --- pages/guide/model/logs.qmd | 48 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 48 insertions(+) diff --git a/pages/guide/model/logs.qmd b/pages/guide/model/logs.qmd index d17d09b..5e364ea 100644 --- a/pages/guide/model/logs.qmd +++ b/pages/guide/model/logs.qmd @@ -1687,6 +1687,54 @@ print(log_contents, sep = "\n") ## Test yourself +
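The optional-logging idea can be sketched as a tiny class (simplified relative to the book's logging utilities): messages are always stored, and only echoed to the console when enabled.

```python
class SimLogger:
    """Collect timestamped log messages, optionally echoing to the console."""

    def __init__(self, log_to_console=False):
        self.log_to_console = log_to_console
        self.messages = []

    def log(self, sim_time, msg):
        entry = f"{sim_time:.2f}: {msg}"
        self.messages.append(entry)
        if self.log_to_console:
            print(entry)

logger = SimLogger(log_to_console=False)  # switched off for a long run
logger.log(0.0, "Patient 1 arrives")
logger.log(3.25, "Patient 1 starts consultation")
```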
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why are logs useful when building and verifying a DES model? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "They let you see when key events occur (e.g. arrivals, consultations), ", + "helping you spot bugs and check that the model behaves as intended." + ), + "They replace the need for any formal validation or testing." +))) +``` + +::: + +::: {.callout-note} + +## Why is logging designed to be optional, rather than always on? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Because logs are only useful during sensitivity analysis.", + "Because DES models cannot run correctly if any logging is enabled.", + answer = paste0( + "Because detailed logs are helpful for short test runs and debugging but ", + "would be excessive and slow for long runs or many replications." + ) +))) +``` + +::: + +### Activity + Try **adding logs to your model**. Experiment with different log messages and parameter options, then run your code and observe how the logs reflect your changes at each step. If you extend your model (for example, with different patient types, destinations, or additional resources), continue making use of logs. Logging is especially valuable during **model development** - it helps you check that each part of your logic works as intended and makes it easier to troubleshoot issues as your simulation grows in complexity. 
From 90567e6dd9114cfe02c770675e7bbd94159daf2f Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 13:57:34 +0000 Subject: [PATCH 20/30] feat(length_warmup): add mcq (#145) --- pages/guide/output_analysis/length_warmup.qmd | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) diff --git a/pages/guide/output_analysis/length_warmup.qmd b/pages/guide/output_analysis/length_warmup.qmd index d147b62..7471cf9 100644 --- a/pages/guide/output_analysis/length_warmup.qmd +++ b/pages/guide/output_analysis/length_warmup.qmd @@ -1125,6 +1125,52 @@ Not relevant - replicating an existing model described in a paper, so just used ## Test yourself +
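The cumulative-mean idea behind the time series inspection approach, in a stdlib sketch (the wait times are made up): running means smooth early fluctuations, making it easier to see when a metric settles.

```python
from itertools import accumulate

def cumulative_means(values):
    """Return the running mean after each successive observation."""
    return [total / (i + 1) for i, total in enumerate(accumulate(values))]

# Made-up series: high initial waits that settle towards ~5.
waits = [12.0, 9.0, 7.0, 5.0, 5.5, 4.5, 5.0, 5.0]
running = cumulative_means(waits)
```

Plotting `running` against time would show the curve flattening once initialisation bias has washed out.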
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## What main problem does a warm-up period aim to reduce in steady‑state DES models? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = "Initialisation bias from starting in an empty or atypical state.", + "Numerical rounding errors in random number generators.", + "Bias caused by using too many replications." +))) +``` + +::: + +::: {.callout-note} + +## In the time series inspection approach, why do we plot the cumulative mean rather than raw performance measures over time? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Because raw values are always constant in steady state.", + answer = paste0( + "Because cumulative means smooth short‑term fluctuations and make it ", + "easier to see when metrics stabilise." + ), + "Because cumulative means completely remove all variability from the data." +))) +``` + +::: + +### Activity + Try using/adapting the code above for your own simulation and identify a suitable length of warm-up. ## References From 3672f3b155a292639db18dfcbea524917713ca85 Mon Sep 17 00:00:00 2001 From: amyheather Date: Thu, 12 Mar 2026 14:09:43 +0000 Subject: [PATCH 21/30] feat(n_reps): add mcq (#145) --- pages/guide/output_analysis/n_reps.qmd | 49 ++++++++++++++++++++++++++ 1 file changed, 49 insertions(+) diff --git a/pages/guide/output_analysis/n_reps.qmd b/pages/guide/output_analysis/n_reps.qmd index 8ac04cb..62c325b 100644 --- a/pages/guide/output_analysis/n_reps.qmd +++ b/pages/guide/output_analysis/n_reps.qmd @@ -1580,6 +1580,55 @@ Not relevant - replicating an existing model described in a paper, so just used ## Test yourself +
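The confidence-interval rule can be sketched in a few lines (a fixed z-value of 1.96 is used here for simplicity; a t-quantile is more usual for small samples, and the replication results are made up):

```python
from statistics import mean, stdev

def precision(results):
    """Half-width of an approximate 95% CI, relative to the mean."""
    half_width = 1.96 * stdev(results) / len(results) ** 0.5
    return half_width / mean(results)

# Made-up replication results: precision improves as replications grow,
# so we would stop once it falls below a target (e.g. 0.05).
reps = [101.0, 97.0, 104.0, 99.0, 100.0, 98.0, 102.0, 101.0, 99.0, 100.0]
```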
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why do we need multiple replications when analysing a DES model? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Because replications are required to remove all randomness from the model.", + "Because each replication uses a different model structure.", + answer = paste0( + "Because each run is affected by randomness, and averaging over many ", + "replications produces a more stable and reliable estimate." + ) +))) +``` + +::: + +::: {.callout-note} + +## What is the core idea behind the confidence interval method for choosing the number of replications? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Increase the number of replications until the confidence interval ", + "around the mean becomes sufficiently narrow and stable." + ), + "Run a fixed, large number of replications regardless of variability.", + "Keep adding replications until the mean exactly matches a target value." +))) +``` + +::: + +### Activity + Have a go for yourself! Try out both the manual and automated methods described above to work out how many replications you need for your own simulation model. From 9958bfe2f33fada562a98f780a5bb56cd9850882 Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 12:42:41 +0000 Subject: [PATCH 22/30] feat(outputs): add mcq (#145) --- pages/guide/output_analysis/outputs.qmd | 51 +++++++++++++++++++++++++ 1 file changed, 51 insertions(+) diff --git a/pages/guide/output_analysis/outputs.qmd b/pages/guide/output_analysis/outputs.qmd index 32fafd6..2aa1475 100644 --- a/pages/guide/output_analysis/outputs.qmd +++ b/pages/guide/output_analysis/outputs.qmd @@ -4635,6 +4635,57 @@ ggplotly(p) ## Test yourself +
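A sketch of why censored (still-waiting) patients are kept separate (field names illustrative): at the end of a run their waits are only known to be *at least* the elapsed time, so folding them into a naive mean would understate the completed waits.

```python
def summarise_waits(patients, run_end):
    """Summarise waits, keeping completed and censored patients apart."""
    completed = [p["wait"] for p in patients if p["seen"]]
    censored = [run_end - p["arrival"] for p in patients if not p["seen"]]
    return {
        "mean_wait_completed": sum(completed) / len(completed),
        "n_unseen": len(censored),
        "min_wait_unseen": min(censored),  # a lower bound, not an estimate
    }

patients = [
    {"arrival": 10.0, "wait": 4.0, "seen": True},
    {"arrival": 20.0, "wait": 6.0, "seen": True},
    {"arrival": 55.0, "wait": None, "seen": False},  # still queueing at end
]
summary = summarise_waits(patients, run_end=60.0)
```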
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## Why does the workflow separate warm-up and data-collection periods in both Python and R?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "To make it easier to debug patient-level logs.",
+  answer = paste0(
+    "To ensure the initial system behaviour does not bias the performance ",
+    "measures."
+  ),
+  "To allow different random number generators in each period.",
+  "To reduce the total runtime of the simulation."
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## Why are backlogged (unseen) patients tracked separately from completed patients when computing mean wait times?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Because unseen patients never arrive in the system.",
+  "Because they always have zero wait time.",
+  answer = paste0(
+    "Because their waits are censored at the end of the run, and including ",
+    "them in the same mean would underestimate true waits for completed ",
+    "patients."
+  )
+)))
+```
+
+:::
+
+### Activity
+
 Try adding one or more of the measures described above to your own DES model. See how the results change as you adjust the model (e.g. arrival rate, number of resources).

From 343eeb134f3c7251f29a9f6158793785d70bc320 Mon Sep 17 00:00:00 2001
From: amyheather
Date: Fri, 13 Mar 2026 12:50:50 +0000
Subject: [PATCH 23/30] feat(parallel): add mcq (#145)

---
 pages/guide/output_analysis/parallel.qmd | 60 ++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/pages/guide/output_analysis/parallel.qmd b/pages/guide/output_analysis/parallel.qmd
index 5af164f..a4592ef 100644
--- a/pages/guide/output_analysis/parallel.qmd
+++ b/pages/guide/output_analysis/parallel.qmd
@@ -675,6 +675,66 @@ For larger-scale simulations, parallelisation will often bring performance impro
 
 ## Test yourself
 
+
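The cores convention described on this page can be sketched as a small helper (an illustrative function, not the packages' actual API): `1` means run sequentially, `-1` means use every available core, and other positive values request that many workers.

```python
import os

def resolve_cores(cores):
    """Translate a cores setting into a concrete worker count."""
    available = os.cpu_count() or 1
    if cores == -1:
        return available
    if cores < 1 or cores > available:
        raise ValueError(f"cores must be -1 or 1..{available}, got {cores}")
    return cores
```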
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## What is the main goal of adding parallel processing to the replication workflow?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "To reduce memory usage of each individual replication.",
+  answer = (
+    "To run multiple replications at the same time, reducing total run time."
+  )
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## What does setting `cores = 1` mean?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Use one core for the OS and the rest for the simulation.",
+  answer = "Use a single core and run sequentially.",
+  "Use all available cores."
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## What does setting `cores = -1` mean?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Use one core for the OS and the rest for the simulation.",
+  "Use a single core and run sequentially.",
+  answer = "Use all available cores."
+)))
+```
+
+:::
+
+### Activity
+
 Try adapting the model to use parallel processing and experiment with different numbers of cores:
 
 * Edit the parameters and model set up so parallelism is enabled.

From 4d6bfbe99d188ce871adf82cb796a2b21cc6e004 Mon Sep 17 00:00:00 2001
From: amyheather
Date: Fri, 13 Mar 2026 13:02:07 +0000
Subject: [PATCH 24/30] feat(replications): add mcq (#145)

---
 pages/guide/output_analysis/replications.qmd | 49 ++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/pages/guide/output_analysis/replications.qmd b/pages/guide/output_analysis/replications.qmd
index a746280..fdf3144 100644
--- a/pages/guide/output_analysis/replications.qmd
+++ b/pages/guide/output_analysis/replications.qmd
@@ -448,6 +448,55 @@ result[["patients_in_system"]] |>
 
 ## Test yourself
 
+
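Stacking patient-level results across replications, with a column recording which run each row came from, can be sketched like this (the structure and "model" are illustrative):

```python
import random

def run_replication(run_number, n_patients=3):
    """Stand-in for one simulation run, returning patient-level rows."""
    rng = random.Random(run_number)  # a different stream per replication
    return [
        {"replication": run_number, "patient": i + 1,
         "wait": round(rng.expovariate(0.2), 2)}
        for i in range(n_patients)
    ]

# Stack all runs into one table; the replication column keeps them apart.
all_patients = []
for rep in range(1, 4):
    all_patients.extend(run_replication(rep))
```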
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why are multiple replications needed in a stochastic DES model? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "Because each run is random, and multiple runs reveal variability and ", + "allow estimation of averages and confidence intervals." + ), + "To avoid having to use warm‑up periods.", + "To reduce the total number of events processed." +))) +``` + +::: + +::: {.callout-note} + +## How are patient‑level results handled when running multiple replications? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Each replication writes its patients to separate files only.", + answer = paste0( + "Patient‑level tables from all runs are stacked together, with a column ", + "indicating which replication each patient belongs to." + ), + "Patient‑level results are discarded once run‑level summaries are computed." +))) +``` + +::: + +### Activity + Extend your model to support multiple replications, as shown above.

From 73509cf5e4c526eb05a47e898c7a63cc62f55614 Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 13:14:18 +0000 Subject: [PATCH 25/30] feat(warmup): add mcq (#145) --- pages/guide/output_analysis/warmup.qmd | 43 ++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) diff --git a/pages/guide/output_analysis/warmup.qmd b/pages/guide/output_analysis/warmup.qmd index c6d4231..5ea00fe 100644 --- a/pages/guide/output_analysis/warmup.qmd +++ b/pages/guide/output_analysis/warmup.qmd @@ -758,6 +758,49 @@ filter_warmup(result = result, warm_up_period = param[["warm_up_period"]]) ## Test yourself +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## What is initialisation bias in a DES model? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "Bias caused by using too many replications.", + "Bias caused by using wrong time units.", + answer = "Bias introduced by starting in an empty or atypical state." +))) +``` + +::: + +::: {.callout-note} + +## In which type of system is starting the simulation from an empty state usually acceptable? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "A hospital ward with chronic bed shortages", + answer = "A clinic that opens empty at the start of each working day", + "A crowded emergency department that is busy 24/7" +))) +``` + +::: + +### Activity + If you haven't already, try adding a warm-up period to your model. **Task:** From d943254ec00949dd4b95101e36b5a8623910e4d6 Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 13:15:01 +0000 Subject: [PATCH 26/30] chore(archive+citation): add quiz + activity titles --- pages/guide/sharing/archive.qmd | 6 +++++- pages/guide/sharing/citation.qmd | 6 +++++- 2 files changed, 10 insertions(+), 2 deletions(-) diff --git a/pages/guide/sharing/archive.qmd b/pages/guide/sharing/archive.qmd index 66455cd..9bacc9a 100644 --- a/pages/guide/sharing/archive.qmd +++ b/pages/guide/sharing/archive.qmd @@ -250,6 +250,10 @@ These repositories are shared on GitHub (since the start of development) and are ## Test yourself +
+ +### Quiz + ```{r} #| echo: false library(webexercises) # nolint: library_call_linter @@ -306,7 +310,7 @@ cat(longmcq(c( ::: -
+### Activity **Ready to put this into practice?** Try archiving your simulation repository - even if it's just an early version or draft! diff --git a/pages/guide/sharing/citation.qmd b/pages/guide/sharing/citation.qmd index 61450a9..6454010 100644 --- a/pages/guide/sharing/citation.qmd +++ b/pages/guide/sharing/citation.qmd @@ -461,6 +461,10 @@ These repositories have `CITATION.cff` files and citation details in the `README ## Test yourself +
+ +### Quiz + ```{r} #| echo: false library(webexercises) @@ -516,7 +520,7 @@ cat(longmcq(c( ::: -
+### Activity **Try it out for yourself!** Have a go at adding citation information to your codebase. For example, visit [cffinit](https://citation-file-format.github.io/cff-initializer-javascript/#/) to create a `CITATION.cff` file and add it to your repository. From 4c6da01d696e95378cee0fc62bb4481942566958 Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 13:28:32 +0000 Subject: [PATCH 27/30] feat(changelog/docstrings/documentation): add mcq (#145) --- pages/guide/sharing/changelog.qmd | 49 ++++++++++++++++++++ pages/guide/style_docs/docstrings.qmd | 48 +++++++++++++++++++ pages/guide/style_docs/documentation.qmd | 59 ++++++++++++++++++++++++ 3 files changed, 156 insertions(+) diff --git a/pages/guide/sharing/changelog.qmd b/pages/guide/sharing/changelog.qmd index 17385b3..8ccb98a 100644 --- a/pages/guide/sharing/changelog.qmd +++ b/pages/guide/sharing/changelog.qmd @@ -227,6 +227,55 @@ These repositories have a `NEWS.md` file and GitHub releases. ## Test yourself +
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## What is the main purpose of a changelog in a simulation project?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "To keep a copy of all GitHub commits.",
+  "To store all the results from the simulation and any changes.",
+  answer = paste0(
+    "To provide a human-readable record of what changed between versions ",
+    "of the code."
+  )
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## From a reproducibility and open‑science perspective, why is linking a publication to a specific tagged release important?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  answer = paste0(
+    "It ensures that readers, reviewers, and future users can find and run ",
+    "the exact code version underlying the published results, even as the ",
+    "project evolves."
+  ),
+  "It prevents you from updating the repository in the future."
+)))
+```
+
+:::
+
+### Activity
+
 **Try making your own changelog and release:**
 
 ::: {.python-content}
diff --git a/pages/guide/style_docs/docstrings.qmd b/pages/guide/style_docs/docstrings.qmd
index c02a7a7..dc423cc 100644
--- a/pages/guide/style_docs/docstrings.qmd
+++ b/pages/guide/style_docs/docstrings.qmd
@@ -469,6 +469,54 @@ These repositories use roxygen2 style docstrings throughout.
 
 ## Test yourself
 
+
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## What is the primary purpose of a docstring?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "To replace inline comments entirely",
+  "To store configuration values for the model",
+  answer = (
+    "To describe what a function, class, or module does and how to use it"
+  )
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## How do docstrings differ from inline comments?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Docstrings are only for tests; comments are for functions.",
+  answer = paste0(
+    "Docstrings explain the overall purpose and interface of an object; ",
+    "inline comments explain specific lines or non‑obvious logic."
+  ),
+  "Docstrings are executed as code; comments are ignored."
+)))
+```
+
+:::
+
+### Activity
+
 Try writing docstrings for functions from your own codebase. Alternatively, practice by writing a docstring for the example function below.

diff --git a/pages/guide/style_docs/documentation.qmd b/pages/guide/style_docs/documentation.qmd
index 3f72433..3e787e5 100644
--- a/pages/guide/style_docs/documentation.qmd
+++ b/pages/guide/style_docs/documentation.qmd
@@ -256,6 +256,65 @@ Both repositories have `README.md` and `CONTRIBUTING.md` files.
 
 ## Test yourself
 
+
+
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## What is the primary role of a `README.md` file in a simulation repository?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  answer = paste0(
+    "To provide an entry‑point overview with purpose, usage, and key ",
+    "instructions for reproducing and running the project."
+  ),
+  paste0(
+    "To summarise key results and figures so readers don’t need to inspect ",
+    "the code."
+  ),
+  paste0(
+    "To document only the development history, such as who worked on the ",
+    "project and when."
+  )
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## What is a key advantage of having a documentation website in addition to a README?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  paste0(
+    "It replaces the need to share the underlying code repository, since ",
+    "users can just read the website."
+  ),
+  answer = paste0(
+    "It provides a richer, navigable space (tutorials, methodological notes, ",
+    "case studies, API reference) that is easier for users to explore than a ",
+    "single long markdown file."
+  )
+)))
+```
+
+:::
+
+### Activity
+
 Do you have a README file? If not, create one! Edit your README to include clear information about the project and your model. Explore the example models and links above for inspiration on effective documentation.
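To make the README activity above more concrete, here is a rough outline you could start from (a sketch only - the section names and order are suggestions, not the book's prescribed template):

```markdown
# Project title

One-paragraph summary: what the model simulates and what question it answers.

## Installation

How to recreate the environment, e.g. from a lock file or environment file.

## Usage

The exact commands or scripts that run the model and reproduce the results.

## Repository structure

A short map of the folders: model code, tests, outputs, documentation.

## Citation and licence

How to cite the project, plus the licence it is released under.
```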
From a3fa5fc45575925690f8599ac8825e1414e7c182 Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 13:42:20 +0000 Subject: [PATCH 28/30] feat(githubactions/linting): add mcq (#145) --- pages/guide/style_docs/github_actions.qmd | 50 +++++++++++++++ pages/guide/style_docs/linting.qmd | 74 +++++++++++++++++++++++ 2 files changed, 124 insertions(+) diff --git a/pages/guide/style_docs/github_actions.qmd b/pages/guide/style_docs/github_actions.qmd index 09c3563..381ad10 100644 --- a/pages/guide/style_docs/github_actions.qmd +++ b/pages/guide/style_docs/github_actions.qmd @@ -388,6 +388,56 @@ As mentioned above, the testing action is more complex than the provided example ## Test yourself + +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## Why is Continuous Integration (CI) particularly valuable for regression testing of DES models? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "It guarantees that simulation outputs will never change over time.", + answer = paste0( + "It re‑runs your tests automatically when changes are pushed or merged, ", + "making it easier to detect when a code change unintentionally alters ", + "previously working behaviour." + ) +))) +``` + +::: + +::: {.callout-note} + +## Why might you choose to run CI workflows on multiple operating systems (e.g., Ubuntu, Windows, macOS) for your simulation code? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "To ensure that OS‑specific differences in paths, tooling, or ", + "dependencies do not cause the model or tests to fail on platforms your ", + "users may use." + ), + "To comply with GitHub’s default requirements for public repositories." +))) +``` + +::: + +### Activity + If you've never used GitHub actions before, have a go with our very basic "Example action" to begin with. Then, try adding a testing and/or linting action to your model repository. diff --git a/pages/guide/style_docs/linting.qmd b/pages/guide/style_docs/linting.qmd index ca8121f..4ff6a1f 100644 --- a/pages/guide/style_docs/linting.qmd +++ b/pages/guide/style_docs/linting.qmd @@ -696,6 +696,80 @@ These repositories use `lintr`. They have a GitHub actions workflow (`.github/wo ## Test yourself +
+ +### Quiz + +```{r} +#| echo: false +library(webexercises) # nolint: library_call_linter +``` + +::: {.callout-note} + +## What is the main purpose of a style guide in coding? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "To specify file naming rules only, not code layout.", + answer = paste0( + "To provide conventions for naming, layout, syntax, docstrings, and ", + "indentation." + ), + "To enforce strict performance benchmarks for all functions." +))) +``` + +::: + +::: {.callout-note} + +## What is the primary purpose of using a linter in your simulation codebase? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + answer = paste0( + "To analyse code for potential errors, bad practices, and style ", + "violations." + ), + paste0( + "To guarantee that the numerical results of the simulation are ", + "statistically correct." + ) +))) +``` + +::: + +:::: {.python-content} + +::: {.callout-note} + +## In the Python examples, why might you run pylint, flake8, and ruff on the same file? + +```{r} +#| output: asis +#| echo: false +cat(longmcq(c( + "They each support different Python versions, so all are required.", + "Only the majority vote between three linters decides whether code passes.", + answer = paste0( + "Each tool has a slightly different focus and rule set, so combining ", + "them can surface different classes of issues (bugs, style, complexity)." + ) +))) +``` + +::: + +:::: + +### Activity + The best way to get comfortable with linting is to try it out on your own code. Take a small script you've written and run your linter to see what it flags. If you don't have any code handy, here's a simple example function you can copy into a file and run through your linter. It works correctly, but has several style and formatting issues that linting tools will pick up. 
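As an extra illustration (this is not the book's own example function, just a sketch), the snippet below runs correctly but would trip several common linter rules:

```python
# Deliberately scruffy: this works, but linters will flag the unused
# imports, the mixed-case function name, the missing docstrings, the
# inconsistent spacing, and the unnecessary else branch.
import sys, math


def Calc_total( x,y ):
    z=x+y
    if z>0:
        return z
    else:
        return 0


print(Calc_total(2, 3))  # prints 5
```

Running `pylint`, `flake8`, or `ruff` over a file like this will each report overlapping but slightly different sets of issues, which is the point made in the quiz above.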
From a4e0880036043c765f13ac360be6de2260bc9e3a Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 13:52:57 +0000 Subject: [PATCH 29/30] feat(qa): add mcq (#145) and activity --- .../quality_assurance.qmd | 58 +++++++++++++++++++ 1 file changed, 58 insertions(+) diff --git a/pages/guide/verification_validation/quality_assurance.qmd b/pages/guide/verification_validation/quality_assurance.qmd index e387c7e..ef8a4eb 100644 --- a/pages/guide/verification_validation/quality_assurance.qmd +++ b/pages/guide/verification_validation/quality_assurance.qmd @@ -232,3 +232,61 @@ Key resources to check out when planning your quality assurance: * The [Quality assurance of code for analysis and research](https://best-practice-and-impact.github.io/qa-of-code-guidance/intro.html) book from the Government Analysis Function. * [Facets of trust in simulation studies](https://doi.org/10.1016/j.ejor.2020.06.043) - a framework for thinking about different dimensions of trust in simulation work and how they evolve over a study’s lifecycle. + +## Test yourself + +
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## Which statement best describes quality assurance (QA) in analytical projects?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "A one-off final check of results before publication.",
+  "A process limited to testing code for bugs.",
+  answer = paste0(
+    "A formal, systematic process of planning checks, carrying them out, and ",
+    "keeping evidence."
+  )
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## Which option best captures the relationship between QA, verification, and validation as described on the page?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "QA and verification are the same; validation is separate.",
+  answer = "Verification and validation are specific checks within QA.",
+  "Validation replaces QA in higher‑risk projects."
+)))
+```
+
+:::
+
+### Activity
+
+Reflect on how you might apply quality assurance to the simulation model you have been developing, or to another recent piece of analytical work. At a minimum, write down:
+
+* The decision the work is intended to inform.
+* Who could act as commissioner, analyst, assurer, and approver (even if some roles are combined).
+* A few QA activities you could realistically do during (a) scoping, (b) design, (c) analysis, and (d) delivery.
+
+You could also have a go at creating a **GitHub Project**, adding some QA items as issues. You could then treat open items as your QA plan and closed items as your QA log.
+

From cb76d9941834180ec86505283cd663f66250b4f1 Mon Sep 17 00:00:00 2001 From: amyheather Date: Fri, 13 Mar 2026 14:00:52 +0000 Subject: [PATCH 30/30] feat(vv/math/test): add mcq (#145) --- .../verification_validation/mathematical.qmd | 51 +++++++++++++++++++ pages/guide/verification_validation/tests.qmd | 49 ++++++++++++++++++ .../verification_validation.qmd | 47 +++++++++++++++++ 3 files changed, 147 insertions(+) diff --git a/pages/guide/verification_validation/mathematical.qmd b/pages/guide/verification_validation/mathematical.qmd index 3081241..87d6e6f 100644 --- a/pages/guide/verification_validation/mathematical.qmd +++ b/pages/guide/verification_validation/mathematical.qmd @@ -755,6 +755,57 @@ Not relevant - model too complex. ## Test yourself + +
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## What is mathematical proof of correctness in the context of a DES model?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Proving that the code has no bugs by inspection.",
+  "Demonstrating that run times match theoretical complexity.",
+  answer = paste0(
+    "Showing that simulation outputs agree with results from an appropriate ",
+    "analytical (mathematical) model under the same assumptions."
+  )
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## Why are M/M/s queueing models a good candidate for proof of correctness of simple clinic simulations?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  answer = paste0(
+    "Their assumptions (Poisson arrivals, exponential service, s servers) ",
+    "match simple DES set‑ups, and classic formulas exist for key ",
+    "performance measures."
+  ),
+  "They cover any arrival and service process.",
+  "They guarantee stable performance regardless of parameter values."
+)))
+```
+
+:::
+
+### Activity
+
 Try adding this test to the model you have set up using this book.
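To sketch what the analytical side of such a test can look like (illustrative Python with made-up function names - not code from the example models), the Erlang C formula gives the expected queue wait for an M/M/s system:

```python
from math import factorial


def erlang_c(arrival_rate, service_rate, servers):
    """Probability that an arriving customer must queue in an M/M/s system."""
    a = arrival_rate / service_rate  # offered load
    rho = a / servers  # utilisation; must be below 1 for a stable system
    if rho >= 1:
        raise ValueError("Unstable system: utilisation must be below 1.")
    top = (a ** servers) / (factorial(servers) * (1 - rho))
    bottom = sum((a ** k) / factorial(k) for k in range(servers)) + top
    return top / bottom


def analytic_mean_wait(arrival_rate, service_rate, servers):
    """Expected time in queue (Wq) for an M/M/s system."""
    p_queue = erlang_c(arrival_rate, service_rate, servers)
    return p_queue / (servers * service_rate - arrival_rate)


# With one server this collapses to the familiar M/M/1 result
# Wq = rho / (mu - lambda), i.e. 0.6 / (5 - 3) here.
print(analytic_mean_wait(3, 5, 1))  # approximately 0.3
```

A test could then run the simulation with matching parameters and assert that the simulated mean queue wait lies within a small tolerance of `analytic_mean_wait(...)`, once warm-up and replications are accounted for.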

diff --git a/pages/guide/verification_validation/tests.qmd b/pages/guide/verification_validation/tests.qmd index c70786f..6015cc1 100644 --- a/pages/guide/verification_validation/tests.qmd +++ b/pages/guide/verification_validation/tests.qmd @@ -649,6 +649,55 @@ Test files include: ## Test yourself +
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## Testing a simulation model, as described on this page, is primarily about:
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Making the model run as fast as possible.",
+  answer = paste0(
+    "Evaluating whether the model behaves as expected, is reliable, and ",
+    "handles different conditions."
+  ),
+  "Proving the model is mathematically perfect."
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## Which statement best describes regression testing in this context?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Testing how well the model fits a regression line.",
+  "Running tests only at the end of the project.",
+  answer = paste0(
+    "Re-running tests to check that code changes have not introduced new ",
+    "issues."
+  )
+)))
+```
+
+:::
+
+### Activity
+
 **Write tests**! Look at **example models** for inspiration on what and how to test. Start writing tests **early**, and run them **often** to catch issues as you develop.

diff --git a/pages/guide/verification_validation/verification_validation.qmd b/pages/guide/verification_validation/verification_validation.qmd
index 9520b9a..b9fe4ff 100644
--- a/pages/guide/verification_validation/verification_validation.qmd
+++ b/pages/guide/verification_validation/verification_validation.qmd
@@ -514,6 +514,53 @@ Verification can be viewed in [GitHub Issue #18](https://github.com/pythonhealth
 
 ## Test yourself
 
+
+
+### Quiz
+
+```{r}
+#| echo: false
+library(webexercises) # nolint: library_call_linter
+```
+
+::: {.callout-note}
+
+## Which statement best describes verification in simulation modelling?
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Checking whether the model is useful for decision makers.",
+  answer = paste0(
+    "Checking that the model correctly implements the intended conceptual ",
+    "model."
+  ),
+  "Comparing model outputs to real‑world data."
+)))
+```
+
+:::
+
+::: {.callout-note}
+
+## Stress testing in this context is primarily about:
+
+```{r}
+#| output: asis
+#| echo: false
+cat(longmcq(c(
+  "Testing how stressed a modeller gets when discovering a bug in their code.",
+  answer = paste0(
+    "Running the model with extremely demanding conditions to see if it ",
+    "behaves sensibly."
+  )
+)))
+```
+
+:::
+
+### Activity
+
 We encourage you to apply as many verification and validation methods as you can to your simulation models. Each approach gives a fresh angle on model quality, and together they build a robust foundation of trust in your work. However, trying to do everything at once can feel overwhelming! Remember, verification and validation are iterative processes - it's normal and expected to work on them bit by bit, returning to add, review, and improve methods as your project develops.
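To close with a concrete (and deliberately tiny) sketch of one of these methods - regression testing against a fixed seed - the toy single-server queue below always reproduces the same result for the same seed, so a stored baseline value can catch unintended behaviour changes (the function and parameter names are illustrative, not from the example models):

```python
import random


def simulate_mean_wait(n_customers, arrival_rate, service_rate, seed):
    """Toy single-server queue (Lindley recursion); returns mean queue wait."""
    rng = random.Random(seed)
    wait = 0.0  # queue wait of the current customer
    total_wait = 0.0
    for _ in range(n_customers):
        total_wait += wait
        service = rng.expovariate(service_rate)
        gap = rng.expovariate(arrival_rate)
        # The next customer waits for the leftover work, if any.
        wait = max(0.0, wait + service - gap)
    return total_wait / n_customers


# Regression check: an identical seed must give an identical result, so
# any change to this number signals a change in model behaviour.
baseline = simulate_mean_wait(5000, 3, 5, seed=42)
assert simulate_mean_wait(5000, 3, 5, seed=42) == baseline
```

In practice you would store the baseline (for example, in a test file or CSV) and assert against it from your test suite, updating it only when a change in results is intended.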