
feat(db): improve GET /annotate/all performance #601

Open
labradorite-dev wants to merge 18 commits into dev from
issue-566-optimize-annotation-load-time

Conversation

@labradorite-dev
Collaborator

@labradorite-dev labradorite-dev commented Feb 28, 2026

Summary

Improves GET /annotate/all performance.

Background Context

Issue #566 describes improving performance of an endpoint. Before improving performance, however, I first wanted to measure current performance so the decision would be driven by data. To that end I established a benchmark using pytest-benchmark. To back the benchmark with scale, it seeds 10k URLs to populate relatively realistic annotation data. The most recent benchmark run can be seen in the new GHA workflow I added (see the PR checks).
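For orientation, the core of what pytest-benchmark automates — warmup rounds, repeated timed calls, summary statistics — can be sketched with the stdlib alone. This is a hypothetical harness for illustration, not the PR's actual fixture:

```python
import statistics
import time


def benchmark(fn, *, rounds=50, warmup=5):
    """Call fn repeatedly and report per-round wall-clock stats in seconds."""
    for _ in range(warmup):  # warm caches/connections before measuring
        fn()
    samples = []
    for _ in range(rounds):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return {
        "rounds": rounds,
        "mean_s": statistics.mean(samples),
        "min_s": min(samples),
        "max_s": max(samples),
    }


# Stand-in workload; the real benchmark hits the HTTP endpoint instead.
stats = benchmark(lambda: sum(range(10_000)))
```

pytest-benchmark adds calibration, outlier detection, and JSON artifact output on top of this basic loop, which is why the PR reaches for it rather than hand-rolling timings.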

The benchmark determined that main_query_s was by far the largest portion of wall-clock time in this endpoint and so I focused my efforts on improving that performance. I considered a number of different solutions but by far the most effective is to convert the url_annotation_count_view and url_annotation_flags from regular PostgreSQL views to materialized views with unique indexes on url_id.
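Sketched as the statements a migration might execute (the view names come from this PR, but the SELECT bodies are placeholders — the original view definitions aren't shown in this excerpt):

```python
# Hedged sketch: "<original view query>" stands in for each view's real
# SELECT body, which is not part of this excerpt.
MATERIALIZE_DDL = [
    "DROP VIEW IF EXISTS url_annotation_count_view",
    "CREATE MATERIALIZED VIEW url_annotation_count_view AS <original view query>",
    # A UNIQUE index on url_id is what makes REFRESH ... CONCURRENTLY legal.
    "CREATE UNIQUE INDEX ix_url_annotation_count_view_url_id"
    " ON url_annotation_count_view (url_id)",
    "DROP VIEW IF EXISTS url_annotation_flags",
    "CREATE MATERIALIZED VIEW url_annotation_flags AS <original view query>",
    "CREATE UNIQUE INDEX ix_url_annotation_flags_url_id"
    " ON url_annotation_flags (url_id)",
]
```

The unique indexes are not just for lookup speed: PostgreSQL requires a unique index before `REFRESH MATERIALIZED VIEW CONCURRENTLY` is allowed, which is why the test plan calls them out as a prerequisite.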

Performance Improvement

| Metric | Before | After | Improvement |
|---|---|---|---|
| main_query_s @ 10k URLs | ~177 ms | ~0.64 ms | ~276× faster |
| Total HTTP roundtrip @ 10k URLs | ~183 ms | ~12 ms | ~15× faster |

From what I can tell, both views are used solely as internal query infrastructure to determine, via sorting, which URL gets served to an annotator next. The speedup does come at a cost, though: staleness.

Staleness Tradeoff

I added the materialized views to the existing RefreshMaterializedViewsOperator schedule which is once per day. In the worst case, URL ranking could be up to 24 hours stale — e.g. a URL that just received its 10th annotation may still be ranked highly when it should be deprioritized. My personal viewpoint is that this is acceptable because:

  • No user-facing data is incorrect — only the ordering of work is affected
  • No writes or authorization decisions are gated on these values
  • Annotators are directed to URLs in a slightly suboptimal order, not a wrong one
  • It follows the same trade-off already accepted by the three existing materialized views in the codebase

If fresher data is a product priority, I'd recommend a more aggressive refresh rate (such as hourly) for the materialized view.

Alternative Long Term Solution

I would be remiss not to include the other, much larger refactor I considered, which is to implement proper CQRS. The core of the issue here is that the data is organized for writes, NOT reads, so at query time we pay the price of stitching it all together. Materialized views shift that stitching slightly left (into a once-a-day job), but a potentially more "complete" fix would be to add a new, fully denormalized table that looks like this:

```sql
-- read model, updated on every write command
CREATE TABLE annotation_queue (
    url_id           INT PRIMARY KEY,
    priority_rank    INT,  -- pre-computed: manual > followed > count > id
    total_count      INT,
    is_manual        BOOL,
    followed_by_user BOOL
    -- ...
);
```

Then, each time we write a new annotation to be consumed by annotators, we also update this table. At query time we are left with an extremely cheap indexed read: `SELECT * FROM annotation_queue ORDER BY priority_rank LIMIT 1`.
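To illustrate that write-path/read-path split, here is a toy in-memory sketch. All names are hypothetical, the ranking key assumes fewer existing annotations rank first, and in the real design both paths would of course live in PostgreSQL:

```python
from dataclasses import dataclass, field


@dataclass
class QueueRow:
    url_id: int
    is_manual: bool = False
    followed_by_user: bool = False
    total_count: int = 0


@dataclass
class AnnotationQueue:
    """Toy stand-in for the denormalized annotation_queue read model."""
    rows: dict = field(default_factory=dict)

    def record_annotation(self, url_id, *, manual=False, followed=False):
        # Write path: every write command also keeps the read model current.
        row = self.rows.setdefault(url_id, QueueRow(url_id))
        row.total_count += 1
        row.is_manual = row.is_manual or manual
        row.followed_by_user = row.followed_by_user or followed

    def next_url(self):
        # Read path: the in-memory analogue of
        #   SELECT * FROM annotation_queue ORDER BY priority_rank LIMIT 1
        # Priority: manual > followed > fewer annotations > lower id.
        return min(
            self.rows.values(),
            key=lambda r: (not r.is_manual, not r.followed_by_user,
                           r.total_count, r.url_id),
        )
```

The point of the pattern is that all the stitching cost moves to the write path, where it is amortized one row at a time, leaving the read path with a trivially indexable query.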

If there's interest, I think this could become a new issue, assuming read performance continues to be a priority for the app.

Test plan

  • All 4 benchmark tests pass (test_benchmark_annotate_all_*)
  • Migration applies cleanly from scratch (alembic upgrade head)
  • CONCURRENTLY refresh works (unique indexes in place as prerequisite)

@gitguardian

gitguardian bot commented Feb 28, 2026

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secret in your pull request
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
|---|---|---|---|---|
| 15086721 | Triggered | Generic Password | baf7550 | .github/workflows/benchmark.yml |
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely. Learn here the best practices.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.

🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.

@labradorite-dev
Collaborator Author

> ⚠️ GitGuardian re-flagged the same secret (id 15086721, Generic Password) on commit f099307 in .github/workflows/benchmark.yml.

Not a real secret (:

- Add pytest-benchmark~=4.0 dev dependency
- Add ContextVar-based timing collector (timing.py) for zero-cost
  production instrumentation via _phase() context managers
- Instrument extract_and_format_get_annotation_result() with per-phase
  timing: format_s, agency_suggestions_s, location_suggestions_s,
  name_suggestions_s, batch_info_s
- Instrument GetNextURLForAllAnnotationQueryBuilder.run() with
  main_query_s timing
- Add benchmark test suite under tests/automated/integration/benchmark/
  with HTTP round-trip and per-phase breakdown tests
- Add GHA workflow (.github/workflows/benchmark.yml) that runs on
  workflow_dispatch or PR to dev, uploads JSON artifact per commit SHA
- Add README with Mermaid sequence diagram of the measured call chain
…566)

Add scale_seed.py with create_scale_data() that seeds 10k URLs plus
geographic hierarchy, agencies, name/agency/location suggestions to
exercise the query planner under realistic load. Add scale_seeder
fixture (module-scoped) and two new benchmark tests that mirror the
existing small-data pair, printing per-phase averages for direct
comparison.
)

The sequential agency_auto_suggestions / add_location_suggestion loops
(4 DB calls × 5k iterations each) blew through the 300s pytest timeout
during fixture setup. Replace with 3 round-trips per suggestion type:
initiate_task → bulk_insert subtasks (return_ids) → bulk_insert suggestions.
SuggestionModel.robo_confidence is typed int; Pydantic rejects float
values with fractional parts (e.g. 0.9). Use 1.0 which coerces cleanly.
…ags (#566)

Convert both annotation views from regular views to MATERIALIZED VIEWs
with unique indexes on url_id. Add CONCURRENTLY refresh calls to
refresh_materialized_views(). Drops main_query_s from ~177ms to ~0.64ms
at 10k-URL scale (~276x improvement).
…test (#566)

With materialized views, URLs added between refreshes have no row in the
view. Inner joins excluded them entirely, making new URLs invisible to
annotators. Switch to LEFT OUTER JOINs so URLs without a view row still
appear with NULL/0 counts.
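The inner-vs-outer distinction can be shown with a toy in-memory join (hypothetical data; the real query joins URL against the materialized views via SQLAlchemy):

```python
urls = [{"id": 1}, {"id": 2}, {"id": 3}]       # id 3 added after the last refresh
mat_view = {1: {"count": 4}, 2: {"count": 9}}  # stale materialized-view rows

# INNER JOIN: URLs with no view row vanish from the annotator queue entirely.
inner = [u for u in urls if u["id"] in mat_view]

# LEFT OUTER JOIN: every URL survives; a missing view row becomes a 0 count
# (NULL coalesced to 0 in SQL terms), so new URLs stay visible to annotators.
left = [
    {**u, "count": mat_view.get(u["id"], {}).get("count", 0)}
    for u in urls
]
```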

The sort test explicitly verifies annotation-count-driven ordering, so it
needs a manual refresh_materialized_views() call after data setup to
reflect counts before querying.
- Migration: Union[str, None] -> Optional[str], add docstrings to upgrade/downgrade
- async_.py: add missing newline at end of file
… heads (#566)

Upstream dev merged 1fb2286a016c (add_internet_archive_to_batch_strategy)
which also chains from 759ce7d0772b, creating two Alembic heads.
Update our migration's down_revision to sit after upstream's.
…o docker-compose

GitGuardian flagged the literal default password added to docs/development.md.
Replaced with a pointer to local_database/docker-compose.yml where it is already defined.
@labradorite-dev labradorite-dev force-pushed the issue-566-optimize-annotation-load-time branch from e5dc903 to 80b77b8 on February 28, 2026 22:30
Collaborator

@maxachis maxachis left a comment


So my impression is that the core idea and the solution are good! Great, even! I think benchmarking is something we lack in this repo, and it's forward-thinking to include it. And the proposed solution makes perfect sense -- those views definitely justify being materialized. I think this will definitely improve performance.

But the PR is also introducing new design principles (primarily the adjustment of code to be amenable to benchmarking) that I want to interrogate before committing. I'm not averse to introducing such principles if they're justified, but I do want to ensure they are justified, because that will be the pattern we follow when benchmarking other parts of this app and data-sources.

A secondary issue is that it's touching a few too many files that don't need to be touched for this PR. I wouldn't block on that, but I am noting it.

from tests.automated.integration.benchmark.scale_seed import ScaleSeedResult
from tests.automated.integration.readonly.helper import ReadOnlyTestHelper

_PHASE_ATTRS = (
Collaborator


I do think this logic is a little difficult to parse. The questions that come to mind are:

  1. What is a "phase" in this context?
  2. Why are they written to JSON?

The answers are in the code, but I think it would benefit to have a bit of documentation explaining the workflow, because it doesn't seem immediately apparent.

We also create an environment variable that is used once and not documented at root level (we have an ENV.md file for this purpose). So I want to interrogate whether we should have it as an env var at all -- which I leave to your discretion, as I don't have a strong opinion beyond wanting to interrogate it. If we do include it, make sure it's documented at root level (and perhaps note this in CLAUDE.md as part of the general style guide).

```yaml
on:
  workflow_dispatch:
  pull_request:
    branches: [dev]
```
Collaborator

@maxachis maxachis Mar 2, 2026


I like the idea of having a benchmark that we can repeatedly review! But I'm not convinced we should have it run on every PR to dev -- main perhaps (which is not listed here), but I want to be sensitive to GitHub action usage limits. We may be nowhere near those limits (I'll ask @josh-chamberlain for deets on that), but it's something I want to flag for closer inspection.

The other thing is that I'm assuming relatively few code changes will impact annotation benchmarking. In that context it may not be efficient to have a GitHub workflow dedicated solely to one (albeit important) part of the app run on every code change, because most of the time it will pass trivially. The lint and test actions, for example, are always relevant: even if most of the code they cover is unchanged, they can conceivably fail on any change. That's far less probable with this action.

That might be an argument for a wider set of benchmarks, which would be a separate issue.

My first instinct is that I like the benchmark, and I'm open to the idea of GitHub actions covering a broader set of benchmarks. But I want to keep scope relatively narrow in this PR. I would either cut the workflow for now or set it to workflow_dispatch only.

# revision identifiers, used by Alembic.
revision: str = '1fb2286a016c'
down_revision: Union[str, None] = '759ce7d0772b'
down_revision: Optional[str] = '759ce7d0772b'
Collaborator

@maxachis maxachis Mar 2, 2026


I've no objections to this in principle, but do keep an eye on keeping scope focused to what's relevant to this issue! We're affecting a number of files as-is, and tighter PRs make things simpler to review.

URLAnnotationFlagsView,
URLAnnotationFlagsView.url_id == URL.id
URLAnnotationFlagsView.url_id == URL.id,
isouter=True
Collaborator


nit: I'm trying to think of a scenario where isouter would be necessary here. I don't think it will have a negative impact, since there are no duplicates in these views (at least as far as I know). But it's also not clear that it makes a meaningful difference.

I suppose, since a mat view will be on a lag, this would allow the query to pick up URLs that haven't yet been added to the mat views. But then we can't do much with those URLs because we don't have the attributes contained in those mat views!

Open to being proven wrong, but otherwise I would nix.

url.user_url_type_suggestions,
url.anon_url_type_suggestions
"""Extract and format the annotation result for a URL."""
with _phase("format_s"):
Collaborator


nit: Why the _s underscore for these? If that's an established term, then mea culpa, but still need my peasant brain educated 🧠.

# Agencies
agency_suggestions: AgencyAnnotationResponseOuterInfo = \
await GetAgencySuggestionsQueryBuilder(url_id=url.id).run(session)
with _phase("agency_suggestions_s"):
Collaborator


I'm not deeply experienced in benchmarking, but I'm hesitant to adjust the code primarily to make it more amenable to benchmarking. In my (limited) experience, there are benchmarking libraries that can identify the biggest contributors without having to resort to these sorts of adjustments.

Again, this isn't my area of knowledge. If there's a compelling reason for doing it like this, I'll approve and add it to a general architecture guide! But let's chat about it.
