4 changes: 4 additions & 0 deletions .markdownlint.jsonc
@@ -0,0 +1,4 @@
{
"MD013": false,
"MD033": false
}
14 changes: 14 additions & 0 deletions services/metrics-calculator/README.md
@@ -16,6 +16,20 @@ Configuration is achieved via the following environment variables:
| NATIONAL_METRICS_S3_PATH_PARAM_NAME | String that is the AWS SSM Parameter Name where the National Metrics S3 path will be outputted to |
| PRACTICE_METRICS_S3_PATH_PARAM_NAME | String that is the AWS SSM Parameter Name where the Practice Metrics S3 path will be outputted to |


### Running the metrics-calculator manually for testing

The metrics-calculator service runs within the dashboard pipeline state machine. To trigger it manually for testing, navigate in the AWS console to `step-functions/statemachines/dashboard-pipeline` and trigger an execution with the following input:
```json
{
"SKIP_METRICS": true,
"time": "2026-02-16"
}
```
Following the execution you will see the metrics-calculator task run, before either the validate-metrics lambda is called on success or the GP2GP Dashboard Alert lambda is called on failure.
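
The execution input above can also be built programmatically rather than typed into the console. A minimal sketch (the helper name is ours; the input shape is taken from the example above):

```python
import json
from datetime import date

def build_metrics_input(run_date: date, skip_metrics: bool = True) -> str:
    """Build the execution input for the dashboard-pipeline state machine."""
    return json.dumps({"SKIP_METRICS": skip_metrics, "time": run_date.isoformat()})

# The resulting JSON can be pasted into the console's "Start execution" dialog,
# or passed as the --input argument to `aws stepfunctions start-execution`.
print(build_metrics_input(date(2026, 2, 16)))
```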

## Developing

Common development workflows are defined in the `tasks` script.
14 changes: 14 additions & 0 deletions services/ods-downloader/README.md
@@ -122,3 +122,17 @@ Perform the following steps:
If you see the below notice when trying to activate the python virtual environment, run `deactivate` before trying again.

> Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.

### Running the ods-downloader pipeline manually for testing

1. Send the asidLookup.csv to the gp-registrations-data email address (see [Manually triggering store_asid_lookup_lambda](https://github.com/NHSDigital/gp2gp-reporting-infrastructure/blob/main/lambdas/store_asid_lookup/README.md)).
1. The final stage of the store-asid-lookup lambda then triggers the ods-downloader pipeline.
1. The pipeline reads from the `prm-gp2gp-asid-lookup-{env}` bucket, which is populated by the store-asid-lookup lambda, and retrieves the relevant practice and organisation metadata.
1. It then writes the organisation metadata to the output bucket `prm-gp2gp-ods-metadata-{env}/{version}/{year}/{month}`.
1. If these operations run successfully, the pipeline execution is reported as a success.
1. NOTE: If you have already populated the S3 lookup bucket, you can retrigger the pipeline with this execution input:
```json
{
"time": "YYYY-MM-01T00:00:00Z"
}
```
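
A small helper for producing that `time` input in the required shape (a sketch; the format is taken only from the placeholder above):

```python
import json
from datetime import datetime, timezone

def build_ods_downloader_input(year: int, month: int) -> str:
    """Build the execution input: midnight UTC on the first of the given month."""
    first_of_month = datetime(year, month, 1, tzinfo=timezone.utc)
    return json.dumps({"time": first_of_month.strftime("%Y-%m-%dT%H:%M:%SZ")})

print(build_ods_downloader_input(2026, 2))  # {"time": "2026-02-01T00:00:00Z"}
```
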
61 changes: 54 additions & 7 deletions services/reports-generator/README.md
@@ -89,6 +89,53 @@ Common development workflows are defined in the `tasks` script.

This project is written in Python 3.14.

### Running the reports-generator manually for testing

> Prior to running these tests, inform Richard Sellers that some additional test emails will be received.

There are three different paths that can be taken after triggering this pipeline:

- Daily/weekly reporting window
- Monthly reporting window
- Custom reporting window

Daily

```json
{
"ALERT_ENABLED": "true",
"CONVERSATION_CUTOFF_DAYS": "0",
"NUMBER_OF_DAYS": "1",
"REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY",
"SEND_EMAIL_NOTIFICATION": "true"
}
```

Monthly

```json
{
"ALERT_ENABLED": "true",
"CONVERSATION_CUTOFF_DAYS": "0",
"NUMBER_OF_MONTHS": "1",
"REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY",
"SEND_EMAIL_NOTIFICATION": "true"
}
```

Custom

```json
{
"ALERT_ENABLED": "true",
"CONVERSATION_CUTOFF_DAYS": "0",
"START_DATETIME": "2024-07-13T00:00Z",
"END_DATETIME": "2024-07-14T00:00Z",
"REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY",
"SEND_EMAIL_NOTIFICATION": "true"
}
```
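
The three inputs differ only in how the reporting window is expressed. A helper that builds them from the shared fields (a sketch based solely on the examples above):

```python
import json

# Fields common to all three reporting-window variants.
BASE_INPUT = {
    "ALERT_ENABLED": "true",
    "CONVERSATION_CUTOFF_DAYS": "0",
    "REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY",
    "SEND_EMAIL_NOTIFICATION": "true",
}

def daily_input(days: int = 1) -> str:
    return json.dumps({**BASE_INPUT, "NUMBER_OF_DAYS": str(days)})

def monthly_input(months: int = 1) -> str:
    return json.dumps({**BASE_INPUT, "NUMBER_OF_MONTHS": str(months)})

def custom_input(start: str, end: str) -> str:
    return json.dumps({**BASE_INPUT, "START_DATETIME": start, "END_DATETIME": end})
```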

### Recommended developer environment

- [pyenv](https://github.com/pyenv/pyenv) to easily switch Python versions.
@@ -98,21 +145,21 @@ This project is written in Python 3.14.

#### Installing pyenv

```
```shell
brew install pyenv
```

#### Configure your shell's environment for Pyenv

```
```shell
# For zsh:
echo 'eval "$(pyenv init --path)"' >> ~/.zprofile
echo 'eval "$(pyenv init -)"' >> ~/.zshrc
```

#### Install new python and set as default

```
```shell
pyenv install 3.14
pyenv global 3.14
```
@@ -121,7 +168,7 @@ pyenv global 3.14

In a new shell, run the following:

```
```shell
python -m pip install pipenv
python -m pip install -U "pip>=21.1"
```
@@ -130,7 +177,7 @@ python -m pip install -U "pip>=21.1"

In a new shell, in the project directory run:

```
```shell
./tasks devenv
```

@@ -140,16 +187,16 @@ This will create a python virtual environment containing all required dependencies.

To find out the path of this new virtual environment, run:

```
```shell
pipenv --venv
```

Now you can configure the IDE. The steps for IntelliJ are as follows:

1. Go to `File -> Project Structure -> SDK -> Add SDK -> Python SDK -> Existing environments`
2. Click on three dots, paste the virtual environment path from before, and point to the python binary.
The path should look like this: `/Users/janeDoe/.local/share/virtualenvs/prm-spine-exporter-NTBCQ41T/bin/python3.14`


### Running the unit and integration tests

`./tasks test`
15 changes: 15 additions & 0 deletions services/spine-exporter/README.md
@@ -9,6 +9,21 @@ Alternatively, you can download one of the docker containers already published t

The main code entrypoint is via `python -m prmexporter.main`.

## Running the Spine-exporter manually for testing
The spine-exporter is usually triggered by AWS EventBridge (see 'Amazon EventBridge/Scheduler/Scheduled rules' for details).

To manually trigger the 'daily-spine-exporter-and-transfer-classifier' step function, navigate to the relevant state machine in the AWS console and use the following input:
```json
{
"time": "2026-02-16T05:37:00Z",
"detail": {}
}
```
After the spine data is collected it is placed into the `prm-gp2gp-raw-spine-data-{env}` S3 bucket, where it is consumed by the
transfer-classifier stage of the state machine. That stage takes the new raw GP2GP events pulled by the spine-exporter task and combines them at the conversation level
to generate useful information about each GP2GP transfer. It also pulls data from the ods-downloader task, which provides details such as the practice name and ICB name.
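
The input above mimics the shape of the EventBridge event the state machine normally receives. A helper for building it for an arbitrary run time (a sketch; the helper name is ours):

```python
import json
from datetime import datetime, timezone

def build_spine_exporter_input(run_time: datetime) -> str:
    """Build an EventBridge-shaped execution input for the state machine."""
    return json.dumps({
        "time": run_time.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "detail": {},
    })

print(build_spine_exporter_input(datetime(2026, 2, 16, 5, 37, tzinfo=timezone.utc)))
```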


## Developing

16 changes: 16 additions & 0 deletions services/transfer-classifier/README.md
@@ -28,6 +28,22 @@ index="spine2vfmmonitor" service="gp2gp" logReference="MPS0053d"
- When START_DATETIME and END_DATETIME are both passed (which both must be at midnight), then the data retrieved will be from the daily spine exporter output, and it will output a daily transfer parquet file for each date within the date range.
Example of ISO-8601 datetime that is specified for START_DATETIME or END_DATETIME - "2022-01-19T00:00:00Z".

### Running the transfer-classifier manually for testing

Whilst the transfer-classifier is run (and therefore exercised) as part of the spine-exporter step function execution, it can also be triggered manually with an execution input such as:

```json
{
"CONVERSATION_CUTOFF_DAYS": "8",
"START_DATETIME": "2026-01-01T00:00:00Z",
"END_DATETIME": "2026-02-01T00:00:00Z",
"OUTPUT_TRANSFER_DATA_BUCKET": "prm-gp2gp-transfer-data-{env}",
"MI": false
}
```
The `MI` key is now deprecated but still needs to be passed in. For every day in the reporting window, the transfer-classifier writes each day's output separately to the
`prm-gp2gp-transfer-data-{env}` S3 bucket, under the given conversation cut-off path.
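
The per-day output behaviour can be illustrated by enumerating the dates a given window covers. A sketch; treating END_DATETIME as exclusive is an assumption, so check the service code before relying on it:

```python
from datetime import datetime, timedelta

def daily_output_dates(start: str, end: str) -> list[str]:
    """List the dates for which a daily transfer file would be written,
    assuming the window is [START_DATETIME, END_DATETIME)."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    current, stop = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    dates = []
    while current < stop:
        dates.append(current.strftime("%Y-%m-%d"))
        current += timedelta(days=1)
    return dates

dates = daily_output_dates("2026-01-01T00:00:00Z", "2026-02-01T00:00:00Z")
print(len(dates), dates[0], dates[-1])  # 31 2026-01-01 2026-01-31
```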

#### Environment variables

Configuration is achieved via the following environment variables: