
Conversation

@1JW1
Contributor

@1JW1 1JW1 commented Jul 2, 2025

Developer Tooling Standard – Production Environments & PII Handling

This PR introduces a new Developer Tooling Standard as part of the Hackney Security Playbook.

It follows on from a recent presentation I gave on Postman, where we investigated an issue with the tool logging secrets and environment variables. We later discovered that PII could also be inadvertently recorded and stored by the tool.
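For context, here is a minimal sketch of the kind of pre-share check that can catch this class of leak, assuming the usual shape of a Postman environment export (a top-level `values` array of `{ key, value }` pairs); the file name and the key-name heuristics are illustrative only and are not part of the standard itself:

```typescript
// check-postman-env.ts
// Hedged sketch: flags populated, secret-looking variables in an exported
// Postman environment file before it is shared or committed.
// Assumes the usual export shape: { name, values: [{ key, value, enabled }] }.
import { readFileSync } from "node:fs";

interface PostmanVariable {
  key: string;
  value?: string;
  enabled?: boolean;
}

interface PostmanEnvironment {
  name?: string;
  values?: PostmanVariable[];
}

// Key names that commonly hold secrets or personal data (illustrative list).
const SENSITIVE_KEY = /(secret|token|password|api[_-]?key|nino|dob|email|address)/i;

const path = process.argv[2] ?? "environment.postman_environment.json";
const env: PostmanEnvironment = JSON.parse(readFileSync(path, "utf8"));

const flagged = (env.values ?? []).filter(
  (v) => SENSITIVE_KEY.test(v.key) && (v.value ?? "").trim() !== ""
);

if (flagged.length > 0) {
  console.error(`Populated sensitive-looking variables in ${env.name ?? path}:`);
  for (const v of flagged) console.error(`  - ${v.key}`);
  process.exit(1); // fail a pre-commit hook or CI step
} else {
  console.log("No populated sensitive-looking variables found.");
}
```

Run as a pre-commit hook or CI step, a check like this doesn't remove the underlying risk, but it can catch the most common accidental export before it leaves a developer's machine.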

Purpose of this Standard

  • To reduce the risk of exposing sensitive citizen data through unapproved or insecure developer tools.
  • To provide clear guidance to developers on what tools can be used in production, what criteria tools must meet, and where responsibilities for tool assessment lie.
  • To ensure that any tools approved for production use with PII are documented in the Hackney Service Catalogue, providing a single point of reference for visibility and governance.

Key Additions in This PR

  • New Developer Tooling Standard section in the Playbook.
  • Detailed requirements for production tool usage.
  • Clear developer responsibilities for tool assessment and documentation in the Service Catalogue and Risk Log.

@1JW1 1JW1 requested review from LBHSPreston and spikeheap July 2, 2025 15:16
@adamtry

adamtry commented Jul 10, 2025

Thanks for working on this standard - it's certainly very important that we act to prevent the leakage of PII.

I think this standard would benefit from a definition of the scope of "tool" for the purposes of these processes. I guess the most obvious reading is that it includes web and desktop apps which explicitly process potentially-PII data, like Postman, but there's a lot of potential scope here, e.g.:

  • Browsers
  • Browser extensions
  • Google workspace tools & plugins
  • Software packages (npm, pip, etc.)
  • IDE extensions e.g. VSCode extensions
  • "Standard" dev tools and IDEs (VSCode, JetBrains, GitHub workspaces, Docker etc), I guess this is where Postman also goes

I'd be wary of making the scope of this too broad immediately, because that would be a lot of things to review, but some general principles to guide sensible judgement before a data handling review has been conducted could also be helpful.

Are there any public reviews that have been carried out by reputable organizations that we could adopt partially or fully?

@spikeheap
Contributor

@adamtry that's a good point.

If you're using a VSCode extension to connect to a production database I think it's reasonable to include it in scope, and software dependencies wouldn't be covered because they're not "development, testing or API interaction tools". If you're using ad-hoc NPM packages in an interactive terminal to query a production system, maybe we would want to do some vetting though (but we're not doing that, right?).

If we believe someone might read "tool" and think "software dependencies" it's reasonable to explicitly descope it from the doc and we can cover dependency audits/SBOM somewhere else. Equally, we don't need to get too bogged down in precision if we don't see many requests.

Browsers & browser plugins are a very interesting question. This is scoped to developer tooling, but that could cover accessing the AWS console to view CloudTrail logs. Perhaps it would be sensible to explicitly exclude them from the scope of this doc, as it feels like that should be a Council-wide policy rather than a restriction only for developers.

Along the same lines, we can refine the scope so it's clearer that it's not just "accessing production data" but accessing that data as an engineer - this isn't really intended to cover viewing a medical form in Google Workspace, but it does cover logging into a server to check why the medical form isn't showing for a user (for example).

> Are there any public reviews that have been carried out by reputable organisations that we could adopt partially or fully?

I hope someone here knows of one. I see ad-hoc reviews when people find problems (as we did with Postman), and I search for CVEs if I'm evaluating something, but I've not seen anything comprehensive. Perhaps our own reviews are something we could make public to help others?
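As a stopgap until a curated public review exists, the public OSV.dev database (which aggregates advisories for npm, PyPI, NuGet and other ecosystems) can be queried directly. A rough sketch of an ad-hoc check; the package name and version below are placeholders:

```typescript
// osv-check.ts
// Hedged sketch: query the public OSV.dev API for known vulnerabilities
// affecting a specific package version (covers npm, PyPI, NuGet, etc.).
// Requires Node 18+ for the built-in fetch.

interface OsvVulnerability {
  id: string;
  summary?: string;
}

interface OsvResponse {
  vulns?: OsvVulnerability[];
}

async function checkPackage(ecosystem: string, name: string, version: string) {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ version, package: { name, ecosystem } }),
  });
  if (!res.ok) throw new Error(`OSV query failed: ${res.status}`);
  const data = (await res.json()) as OsvResponse;
  return data.vulns ?? [];
}

// Placeholder package and version purely for illustration.
checkPackage("npm", "example-package", "1.2.3").then((vulns) => {
  if (vulns.length === 0) {
    console.log("No known advisories for this version.");
  } else {
    for (const v of vulns) console.log(`${v.id}: ${v.summary ?? "(no summary)"}`);
  }
});
```

A known-vulnerability lookup is obviously narrower than a data-handling review (it says nothing about telemetry or ToS), but it's a cheap first filter when evaluating a tool.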

Contributor

@Duslerke Duslerke left a comment


Overall, this proposal makes sense so long as it's focused on tools like desktop apps and web services (like AI tools).

Comment on lines 33 to 34
- Obtain approval from the Head of Engineering.
- Add approved tools to the [Hackney Service Catalogue](https://docs.google.com/spreadsheets/d/1QoinQ9Yvlr4gvUoROYAA8vDWOWyXw7-vokZmPEH296A/edit?gid=1633075159#gid=1633075159), so that its use is documented and visible to relevant teams.
Contributor

@Duslerke Duslerke Jul 11, 2025


For the record: this (line 34) won't be the end of it.

Software changes over time, and so do its licensing and terms and conditions. If you want examples, look no further than "Postman" itself or "Serverless Framework", both of which have increased their corporate reach.

This means that whatever was approved will need to be re-reviewed over time. Someone will need to go over each thing that was approved and check its terms and conditions, telemetry collection, data storage, etc. Do we want to define a time frame for when things get re-reviewed, or do we want to do this on every (or every other) update of a given piece of software (or of its ToS or privacy policy)?

This is still manageable if it's regular dev tools software. But if we go as far as NuGet packages or NPM packages, as I've seen suggested, this can become quite laborious very quickly, with the need to track multiple versions of the same thing, etc.

Also a curious case: under what circumstances would the Head of Engineering not approve a tool? I'm assuming we present the tool for approval after it ticks all of the boxes described in the criteria section. So if all the boxes are ticked, is the approval just a formality, or is the Head of Engineering expected to do independent research to double-check/verify the claims made in the proposal? If it's the latter, then this will likely limit the breadth of what kinds of tools we verify (so stuff like NPM falls out of the picture).

Contributor Author


Great questions! I really appreciate you digging into the details. I’ve been thinking about how we can keep the process practical and secure without overloading anyone.

When a tool's licensing, privacy policy, telemetry setup or any other terms change, developers usually get notified, either by email or through update notes when the tool is upgraded. At that point, the person who introduced the tool (typically the dev or team lead) should take ownership of reviewing the changes; if they are not available, another dev who is using the tool should do so. They should check the release notes for anything concerning, update the Hackney Service Catalogue, and make any necessary adjustments to the risk log.

We can make the approval process easier and more consistent by introducing a Tool Evaluation Checklist, which I’ll include in this PR.

To support long-term compliance, teams should also re-review their approved tools on an annual basis to ensure they’re still in use and remain suitable for our needs. I’ll reflect this in the PR as well.
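To make that annual re-review harder to forget, one option (a sketch only, assuming the relevant Service Catalogue rows can be exported or mirrored as JSON; the file name and field names here are hypothetical) would be a small script that flags overdue entries:

```typescript
// stale-tool-reviews.ts
// Hedged sketch: flag approved tools whose last review is older than 12 months.
// Assumes a JSON export/mirror of the relevant Service Catalogue rows; the
// file name and field names (tool, owner, lastReviewed) are hypothetical.
import { readFileSync } from "node:fs";

interface ToolRecord {
  tool: string;
  owner: string;
  lastReviewed: string; // ISO date, e.g. "2025-07-17"
}

const REVIEW_INTERVAL_MS = 365 * 24 * 60 * 60 * 1000;

const records: ToolRecord[] = JSON.parse(
  readFileSync("approved-tools.json", "utf8")
);

const now = Date.now();
const overdue = records.filter(
  (r) => now - new Date(r.lastReviewed).getTime() > REVIEW_INTERVAL_MS
);

for (const r of overdue) {
  // In practice this could open a ticket or post to a team channel.
  console.log(
    `Re-review due: ${r.tool} (owner: ${r.owner}, last reviewed ${r.lastReviewed})`
  );
}
```

Whether something like this runs on a schedule or is kicked off manually matters less than having a single place where the last-reviewed date lives.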

Does this address all of your concerns?

@1JW1 1JW1 marked this pull request as ready for review July 17, 2025 13:26
@1JW1 1JW1 requested review from a team as code owners July 17, 2025 13:26
@1JW1 1JW1 merged commit 20fcb4e into main Jul 17, 2025
5 checks passed
@1JW1 1JW1 deleted the CHORE-CSAT-855-add-technical-standard branch July 17, 2025 13:27