From 5ec060dbf4824dc36fe0543cf9306c383aeb006f Mon Sep 17 00:00:00 2001 From: samfallowfield <73955537+samfallowfield@users.noreply.github.com> Date: Fri, 13 Feb 2026 11:42:34 +0000 Subject: [PATCH 1/7] PRM-717 added manual run steps for the ods-downloader --- services/ods-downloader/README.md | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/services/ods-downloader/README.md b/services/ods-downloader/README.md index 5477f933..37607f7e 100644 --- a/services/ods-downloader/README.md +++ b/services/ods-downloader/README.md @@ -122,3 +122,16 @@ Perform the following steps: If you see the below notice when trying to activate the python virtual environment, run `deactivate` before trying again. > Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning. + +### Running the ods-downloader pipeline manually + +1. Send the asidLookup.csv to the gp-registrations-data email address [Manually triggering store_asid_lookup_lambda](https://github.com/NHSDigital/gp2gp-reporting-infrastructure/blob/main/lambdas/store_asid_lookup/README.md). +1. This will then trigger the Ods-Downloader pipeline. +1. It will then read from the `prm-gp2gp-asid-lookup-{env}` which is populated by the store asid lookup lambda. It will then grab the relevant practices and organisation metadata. +1. It will then write the Organisation metadata to the output bucket `prm-gp2gp-ods-metadata-{env}/{version}/{year}/{month}` +1. If these operations are run successfully then the pipelines execution will return as a success. +1. 
NOTE: If you have already populated the S3 lookup bucket then you may retrigger the pipeline with this execution input: +```json + { + "time": "YYY-MM-01T00:00:00Z" + } \ No newline at end of file From 6eead6bf16b32fe908ab36bd7447a50bea1e858b Mon Sep 17 00:00:00 2001 From: samfallowfield <73955537+samfallowfield@users.noreply.github.com> Date: Fri, 13 Feb 2026 14:41:37 +0000 Subject: [PATCH 2/7] PRM-717 added manual run steps for the reports-generator step-function --- services/ods-downloader/README.md | 2 +- services/reports-generator/README.md | 38 ++++++++++++++++++++++++++++ 2 files changed, 39 insertions(+), 1 deletion(-) diff --git a/services/ods-downloader/README.md b/services/ods-downloader/README.md index 37607f7e..021920ed 100644 --- a/services/ods-downloader/README.md +++ b/services/ods-downloader/README.md @@ -123,7 +123,7 @@ If you see the below notice when trying to activate the python virtual environme > Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning. -### Running the ods-downloader pipeline manually +### Running the ods-downloader pipeline manually for testing 1. Send the asidLookup.csv to the gp-registrations-data email address [Manually triggering store_asid_lookup_lambda](https://github.com/NHSDigital/gp2gp-reporting-infrastructure/blob/main/lambdas/store_asid_lookup/README.md). 1. This will then trigger the Ods-Downloader pipeline. diff --git a/services/reports-generator/README.md b/services/reports-generator/README.md index 1396f893..dd6b5d6f 100644 --- a/services/reports-generator/README.md +++ b/services/reports-generator/README.md @@ -89,6 +89,44 @@ Common development workflows are defined in the `tasks` script. This project is written in Python 3.14. 
+### Running the reports-generator manually for testing +#### There are 3 different paths that can be taken after triggering this pipeline: +- Daily/weekly reporting window +- Monthly reporting window +- Custom reporting window + +Daily +```json +{ + "ALERT_ENABLED": "true", + "CONVERSATION_CUTOFF_DAYS": "0", + "NUMBER_OF_DAYS": "1", + "REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY", + "SEND_EMAIL_NOTIFICATION": "true" +} +``` +Monthly +```json +{ + "ALERT_ENABLED": "true", + "CONVERSATION_CUTOFF_DAYS": "0", + "NUMBER_OF_MONTHS": "1", + "REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY", + "SEND_EMAIL_NOTIFICATION": "true" +} +``` +Custom +```json +{ + "ALERT_ENABLED": "true", + "CONVERSATION_CUTOFF_DAYS": "0", + "START_DATETIME": "2026-02-01T00:00Z", + "END_DATETIME": "2026-02-06T00:00Z", + "REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY", + "SEND_EMAIL_NOTIFICATION": "true" +} +``` + ### Recommended developer environment - [pyenv](https://github.com/pyenv/pyenv) to easily switch Python versions. From 048d142af4ea86288cfcf711498aec13b106671c Mon Sep 17 00:00:00 2001 From: samfallowfield <73955537+samfallowfield@users.noreply.github.com> Date: Fri, 13 Feb 2026 17:07:47 +0000 Subject: [PATCH 3/7] PRM-717 spine exporter set up note --- services/spine-exporter/README.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/services/spine-exporter/README.md b/services/spine-exporter/README.md index 004e66a8..22846ff5 100644 --- a/services/spine-exporter/README.md +++ b/services/spine-exporter/README.md @@ -9,6 +9,10 @@ Alternatively, you can download one of the docker containers already published t The main code entrypoint is via `python -m prmexporter.main`. 
+## Running the Spine-exporter manually for testing
+
+
+prm-gp2gp-raw-spine-data-{env}
 
 ## Developing
 

From 3ac5e6d0efdbf88f70dda7b0acfb1d3a8e21ec04 Mon Sep 17 00:00:00 2001
From: samfallowfield <73955537+samfallowfield@users.noreply.github.com>
Date: Tue, 17 Feb 2026 10:08:49 +0000
Subject: [PATCH 4/7] PRM-717 added manual run steps for services

---
 services/metrics-calculator/README.md  | 14 ++++++++++++++
 services/spine-exporter/README.md      | 15 +++++++++++++--
 services/transfer-classifier/README.md | 16 ++++++++++++++++
 3 files changed, 43 insertions(+), 2 deletions(-)

diff --git a/services/metrics-calculator/README.md b/services/metrics-calculator/README.md
index 25fce04c..a1f6fc47 100644
--- a/services/metrics-calculator/README.md
+++ b/services/metrics-calculator/README.md
@@ -16,6 +16,20 @@ Configuration is achieved via the following environment variables:
 | NATIONAL_METRICS_S3_PATH_PARAM_NAME | String that is the AWS SSM Parameter Name where the National Metrics S3 path will be outputted to |
 | PRACTICE_METRICS_S3_PATH_PARAM_NAME | String that is the AWS SSM Parameter Name where the Practice Metrics S3 path will be outputted to |
 
+
+### Running the metrics-calculator manually for testing
+
+The metrics-calculator service is run within the dashboard-pipeline state-machine. To trigger it manually for testing, navigate in the AWS
+console to step-functions/statemachines/dashboard-pipeline and start an execution with the following input:
+```json
+{
+  "SKIP_METRICS": true,
+  "time": "2026-02-16"
+}
+```
+Following this execution you will see the metrics-calculator task run, followed by either the validate-metrics lambda on success or the GP2GP Dashboard
+Alert lambda on failure.
+
 ## Developing
 
 Common development workflows are defined in the `tasks` script.
diff --git a/services/spine-exporter/README.md b/services/spine-exporter/README.md
index 22846ff5..aa7368c8 100644
--- a/services/spine-exporter/README.md
+++ b/services/spine-exporter/README.md
@@ -10,10 +10,21 @@ Alternatively, you can download one of the docker containers already published t
 The main code entrypoint is via `python -m prmexporter.main`.
 
 ## Running the Spine-exporter manually for testing
+The spine-exporter is usually triggered by AWS EventBridge (see 'Amazon EventBridge/Scheduler/Scheduled rules' for details).
+
+To manually trigger the 'daily-spine-exporter-and-transfer-classifier' step-function,
+navigate to the relevant state-machine in the AWS console and use the following input:
+```json
+{
+  "time": "2026-02-16T05:37:00Z",
+  "detail": {}
+}
+```
+Once collected, the spine data is placed into the prm-gp2gp-raw-spine-data-{env} S3 bucket, where it can be used by the
+transfer-classifier stage of the state-machine. That stage takes the new raw GP2GP events pulled by the spine-exporter task and combines them at the conversation level
+to generate useful information about each GP2GP transfer. It also pulls data from the ods-downloader task, which provides details such as the practice name, ICB name, etc.
 
-prm-gp2gp-raw-spine-data-{env}
-
 ## Developing
 
 Common development workflows are defined in the `tasks` script.
diff --git a/services/transfer-classifier/README.md b/services/transfer-classifier/README.md
index 1e9ec488..8f846681 100644
--- a/services/transfer-classifier/README.md
+++ b/services/transfer-classifier/README.md
@@ -28,6 +28,22 @@ index="spine2vfmmonitor" service="gp2gp" logReference="MPS0053d"
 
 - When START_DATETIME and END_DATETIME are both passed (which both must be at midnight), then the data retrieved will be from the daily spine exporter output, and it will output a daily transfer parquet file for each date within the date range.
Example of ISO-8601 datetime that is specified for START_DATETIME or END_DATETIME - "2022-01-19T00:00:00Z".
 
+### Running the transfer-classifier manually for testing
+
+Whilst the transfer-classifier will be run and tested during the spine-exporter step-function's execution, it can also be triggered directly with the following input:
+
+```json
+{
+  "CONVERSATION_CUTOFF_DAYS": "8",
+  "START_DATETIME": "2026-01-01T00:00:00Z",
+  "END_DATETIME": "2026-02-01T00:00:00Z",
+  "OUTPUT_TRANSFER_DATA_BUCKET": "prm-gp2gp-transfer-data-{env}",
+  "MI": false
+}
+```
+The MI key is now deprecated but still needs to be passed in. For every day in the reporting window it will write
+a separate output for each day to the S3 bucket `prm-gp2gp-transfer-data-{env}`, under the given conversation cut-off path.
+
 #### Environment variables
 
 Configuration is achieved via the following environment variables:

From 34f4f5a52cdd0052fc75f4d4a190a79e98b00340 Mon Sep 17 00:00:00 2001
From: samfallowfield <73955537+samfallowfield@users.noreply.github.com>
Date: Wed, 18 Feb 2026 16:01:15 +0000
Subject: [PATCH 5/7] PRM-717 added manual run steps for services

---
 services/ods-downloader/README.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/services/ods-downloader/README.md b/services/ods-downloader/README.md
index 021920ed..9dfdf1fc 100644
--- a/services/ods-downloader/README.md
+++ b/services/ods-downloader/README.md
@@ -133,5 +133,9 @@ If you see the below notice when trying to activate the python virtual environme
 1. NOTE: If you have already populated the S3 lookup bucket then you may retrigger the pipeline with this execution input:
 ```json
    {
-     "time": "YYY-MM-01T00:00:00Z"
-   }
\ No newline at end of file
+     "time": "YYYY-MM-01T00:00:00Z"
+   }
+```
+
+Not automatically calling the pipeline? may need manual triggering
+triggered on the 1st of the month by ???
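As an aside on the retrigger input above: the `time` value is simply the first of the month at midnight UTC. A short sketch of building that payload programmatically — the helper name and the commented-out boto3 call are illustrative, not part of the repo:

```python
import json
from datetime import datetime, timezone


def build_ods_execution_input(now=None):
    """Build the ods-downloader retrigger input: the first of the
    current month at midnight UTC (illustrative helper, not in the repo)."""
    now = now or datetime.now(timezone.utc)
    first_of_month = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    return json.dumps({"time": first_of_month.strftime("%Y-%m-%dT%H:%M:%SZ")})


# Starting the execution could then look like this (the state machine ARN is a placeholder):
# import boto3
# boto3.client("stepfunctions").start_execution(
#     stateMachineArn="arn:aws:states:<region>:<account>:stateMachine:<ods-downloader>",
#     input=build_ods_execution_input(),
# )
```

The same payload can of course be pasted by hand into the Step Functions console, as the steps above describe.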
From 406644b89d2a9932d81ce06540e8b8daace1f471 Mon Sep 17 00:00:00 2001 From: samfallowfield <73955537+samfallowfield@users.noreply.github.com> Date: Mon, 23 Feb 2026 10:27:22 +0000 Subject: [PATCH 6/7] PRM-717 updated ReadMe for ods-downloader --- services/ods-downloader/README.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/services/ods-downloader/README.md b/services/ods-downloader/README.md index 9dfdf1fc..2ec82392 100644 --- a/services/ods-downloader/README.md +++ b/services/ods-downloader/README.md @@ -126,7 +126,7 @@ If you see the below notice when trying to activate the python virtual environme ### Running the ods-downloader pipeline manually for testing 1. Send the asidLookup.csv to the gp-registrations-data email address [Manually triggering store_asid_lookup_lambda](https://github.com/NHSDigital/gp2gp-reporting-infrastructure/blob/main/lambdas/store_asid_lookup/README.md). -1. This will then trigger the Ods-Downloader pipeline. +1. The final stages of the Store-asid-lookup-lambda will then trigger the Ods-Downloader pipeline. 1. It will then read from the `prm-gp2gp-asid-lookup-{env}` which is populated by the store asid lookup lambda. It will then grab the relevant practices and organisation metadata. 1. It will then write the Organisation metadata to the output bucket `prm-gp2gp-ods-metadata-{env}/{version}/{year}/{month}` 1. If these operations are run successfully then the pipelines execution will return as a success. @@ -136,6 +136,3 @@ If you see the below notice when trying to activate the python virtual environme "time": "YYYY-MM-01T00:00:00Z" } ``` - -Not automatically calling the pipeline? may need manual triggering -triggered on the 1st of the month by ??? 
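One detail worth checking before any manual transfer-classifier run: START_DATETIME and END_DATETIME must both be at midnight. A hypothetical pre-flight check for an execution input, using the key names from the READMEs (the validator itself is not part of the repo, and its rules are an assumption based on the documented constraints):

```python
import json
from datetime import datetime


def validate_classifier_input(payload: str) -> dict:
    """Sanity-check a manual transfer-classifier execution input
    (hypothetical helper; key names taken from the README)."""
    config = json.loads(payload)
    for key in ("START_DATETIME", "END_DATETIME"):
        # Both datetimes must be at midnight (see the transfer-classifier notes).
        dt = datetime.strptime(config[key], "%Y-%m-%dT%H:%M:%SZ")
        if (dt.hour, dt.minute, dt.second) != (0, 0, 0):
            raise ValueError(f"{key} must be at midnight, got {config[key]}")
    if int(config["CONVERSATION_CUTOFF_DAYS"]) < 0:
        raise ValueError("CONVERSATION_CUTOFF_DAYS must be non-negative")
    return config
```

Running an input through a check like this before pasting it into the console avoids a failed state-machine execution over a non-midnight timestamp.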
From 1cddd60f8d69f8a4ea15fa1b65c1a4fb86dd8a64 Mon Sep 17 00:00:00 2001 From: chrisbloe Date: Tue, 24 Feb 2026 16:23:31 +0000 Subject: [PATCH 7/7] README updates --- .markdownlint.jsonc | 4 ++++ services/reports-generator/README.md | 29 ++++++++++++++++++---------- 2 files changed, 23 insertions(+), 10 deletions(-) create mode 100644 .markdownlint.jsonc diff --git a/.markdownlint.jsonc b/.markdownlint.jsonc new file mode 100644 index 00000000..151ee39b --- /dev/null +++ b/.markdownlint.jsonc @@ -0,0 +1,4 @@ +{ + "MD013": false, + "MD033": false +} \ No newline at end of file diff --git a/services/reports-generator/README.md b/services/reports-generator/README.md index dd6b5d6f..27f8316f 100644 --- a/services/reports-generator/README.md +++ b/services/reports-generator/README.md @@ -90,12 +90,17 @@ Common development workflows are defined in the `tasks` script. This project is written in Python 3.14. ### Running the reports-generator manually for testing -#### There are 3 different paths that can be taken after triggering this pipeline: + +> Prior to running these tests, inform Richard Sellers that some additional test emails will be received. 
+ +There are 3 different paths that can be taken after triggering this pipeline: + - Daily/weekly reporting window - Monthly reporting window - Custom reporting window Daily + ```json { "ALERT_ENABLED": "true", @@ -105,7 +110,9 @@ Daily "SEND_EMAIL_NOTIFICATION": "true" } ``` + Monthly + ```json { "ALERT_ENABLED": "true", @@ -115,13 +122,15 @@ Monthly "SEND_EMAIL_NOTIFICATION": "true" } ``` + Custom + ```json { "ALERT_ENABLED": "true", "CONVERSATION_CUTOFF_DAYS": "0", - "START_DATETIME": "2026-02-01T00:00Z", - "END_DATETIME": "2026-02-06T00:00Z", + "START_DATETIME": "2024-07-13T00:00Z", + "END_DATETIME": "2024-07-14T00:00Z", "REPORT_NAME": "TRANSFER_OUTCOMES_PER_SUPPLIER_PATHWAY", "SEND_EMAIL_NOTIFICATION": "true" } @@ -136,13 +145,13 @@ Custom #### Installing pyenv -``` +```shell brew install pyenv ``` #### Configure your shell's environment for Pyenv -``` +```shell For zsh: echo 'eval "$(pyenv init --path)"' >> ~/.zprofile echo 'eval "$(pyenv init -)"' >> ~/.zshrc @@ -150,7 +159,7 @@ echo 'eval "$(pyenv init -)"' >> ~/.zshrc #### Install new python and set as default -``` +```shell pyenv install 3.14 pyenv global 3.14 ``` @@ -159,7 +168,7 @@ pyenv global 3.14 In a new shell, run the following: -``` +```shell python -m pip install pipenv python -m pip install -U "pip>=21.1” ``` @@ -168,7 +177,7 @@ python -m pip install -U "pip>=21.1” In a new shell, in the project directory run: -``` +```shell ./tasks devenv ``` @@ -178,16 +187,16 @@ This will create a python virtual environment containing all required dependenci To find out the path of this new virtual environment, run: -``` +```shell pipenv --venv ``` Now you can configure the IDE. The steps for IntelliJ are following: + 1. Go to `File -> Project Structure -> SDK -> Add SDK -> Python SDK -> Existing environments` 2. Click on three dots, paste the virtual environment path from before, and point to the python binary. 
The path should look like this: `/Users/janeDoe/.local/share/virtualenvs/prm-spine-exporter-NTBCQ41T/bin/python3.14` - ### Running the unit and integration tests `./tasks test`
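The three reports-generator execution inputs shown earlier differ only in their window keys. A hypothetical helper that assembles any of the three payloads (key names are from the README examples; the function itself is illustrative, not part of the repo):

```python
import json


def reports_generator_input(report_name, *, days=None, months=None,
                            start=None, end=None, cutoff_days=0):
    """Assemble a manual reports-generator execution input for a daily,
    monthly, or custom reporting window (illustrative helper)."""
    payload = {
        "ALERT_ENABLED": "true",
        "CONVERSATION_CUTOFF_DAYS": str(cutoff_days),
        "REPORT_NAME": report_name,
        "SEND_EMAIL_NOTIFICATION": "true",
    }
    if days is not None:
        payload["NUMBER_OF_DAYS"] = str(days)      # daily/weekly window
    elif months is not None:
        payload["NUMBER_OF_MONTHS"] = str(months)  # monthly window
    else:
        payload["START_DATETIME"] = start          # custom window
        payload["END_DATETIME"] = end
    return json.dumps(payload, indent=2)
```

Pasting the returned JSON into the Step Functions console reproduces the daily, monthly, and custom examples shown earlier.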