The checks are classified here based on the concerns they help to address.

## Usage

All applicable quality checks should be applied. Not all checks are universal, i.e. applicable in all contexts: for example, accessibility testing applies only to applications with a user interface.

The majority of these checks should be [automated](./patterns/automate-everything.md) via [continuous integration / continuous deployment](./practices/continuous-integration.md): the optimal sequencing of these checks within CI/CD pipelines will depend on the project's branching strategy, deployment strategy, etc.
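
As a minimal sketch of fail-fast sequencing, assuming a Python project: the check commands and their ordering below are hypothetical, and a real pipeline would express this in the CI system's own configuration.

```python
# run_checks.py - illustrative sketch of sequencing quality checks in CI.
# Fast, cheap checks run first for quick feedback; slower checks run later.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],           # static analysis / linting (fast)
    ["pytest", "tests/unit"],         # unit tests (fast)
    ["pytest", "tests/integration"],  # integration tests (slower)
]

for cmd in CHECKS:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)  # fail fast: later checks never run
```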

All of these checks are important, even where their purpose overlaps with other checks in the list. For example, comprehensive functional testing could be achieved without unit testing, using only the other functional test types on this list - but this would result in a very long-running and inefficient test suite, precluding fast feedback and impeding the rapid, safe delivery of software changes. For further details please see [test practices](./practices/testing.md), and especially the [test pyramid](https://martinfowler.com/articles/practical-test-pyramid.html).
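
As a hypothetical illustration of the pyramid's cost argument (the function and test below are invented for this sketch): a business rule covered at unit level runs in-process in microseconds, whereas exercising the same rule through a deployed user interface typically costs seconds per case, which compounds badly across a comprehensive suite.

```python
# Hypothetical business rule and unit test; names are illustrative only.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_boundaries():
    # Hundreds of in-process checks like these still finish in well under
    # a second; the same coverage via end-to-end tests would take minutes.
    assert apply_discount(100.0, 20) == 80.0
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
```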

Although we separate resilience checks into their notional components, when it comes to implementation these may be joined up where useful. For example, an automated test suite may test the application for peak load and then drop down to sustained load in order to conduct a soak test.
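
A minimal sketch of such a joined-up profile, assuming a hypothetical `run_load` helper wrapping whatever load tool is in use; the stage names, rates and durations are illustrative only.

```python
# Hypothetical combined resilience profile: a peak-load stage followed by
# a sustained soak stage, executed as a single suite.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    requests_per_second: int
    duration_seconds: int

PROFILE = [
    Stage("peak", requests_per_second=500, duration_seconds=300),    # 5-minute spike
    Stage("soak", requests_per_second=100, duration_seconds=14400),  # 4-hour soak
]

def run_profile(run_load) -> None:
    """Drive the assumed `run_load` callable through each stage in turn."""
    for stage in PROFILE:
        run_load(rate=stage.requests_per_second, duration=stage.duration_seconds)
```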

## RAG scale

We rate our applications against each of these checks as follows:

* Green = the quality check is applied frequently and consistently (in practice this typically means [automated](./patterns/automate-everything.md) via [continuous integration / continuous deployment](./practices/continuous-integration.md)), the output of the check is a quality gate (as opposed to just a warning / for information), and the tolerances for that quality gate (e.g. code coverage %) are agreed and understood.
* Amber = the quality check is applied, but not all conditions for green are met - for example, the check generates warnings that may or may not be acted on, or runs only on an ad hoc basis and so cannot guarantee a consistent quality gate (see the sketch after this list).
* Red = the quality check is not applied.
* N/A = the quality check is not applicable.
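
The gate-versus-warning distinction above can be made concrete. A hypothetical sketch, with an invented check and an illustrative coverage tolerance:

```python
# Hypothetical sketch: the same check behaving as a green-style gate
# versus an amber-style warning. The 80% tolerance is illustrative.
import sys

AGREED_TOLERANCE = 80.0  # e.g. an agreed code coverage %

def check(measured: float, is_gate: bool) -> None:
    if measured >= AGREED_TOLERANCE:
        print(f"OK: {measured}% meets the {AGREED_TOLERANCE}% tolerance")
    elif is_gate:
        print(f"FAIL: {measured}% is below the agreed tolerance")
        sys.exit(1)  # green-style: a non-zero exit stops the pipeline
    else:
        print(f"WARNING: {measured}% is below the agreed tolerance")

check(72.5, is_gate=False)  # amber-style: a warning only, the build continues
check(72.5, is_gate=True)   # green-style: the build fails here
```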

## Tracking progress
