Continuous benchmarking #97

@marvin-j97

Current plan:

  • store raw results in S3 + DynamoDB
  • make it possible to run on different types of cloud instances

DynamoDB key layout (PK, SK):

  • (year#instance_type, d#month#day#hh#mm#ss#commit_id) -> { s3Path, version }
  • (year#instance_type, c#commit_id) -> { s3Path, version }
  • (q#instance_type, commit_id) -> { version }
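
To make the layout concrete, here is a minimal sketch of the push-time enqueue write, using the AWS SDK v3 for TypeScript; the table name "benchmarks" is an assumption, not something decided here:

```ts
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const db = new DynamoDBClient({});

// Enqueue a freshly pushed commit for one instance type:
// (q#instance_type, commit_id) -> { version }
async function enqueueCommit(instanceType: string, commitId: string, version: number) {
  await db.send(new PutItemCommand({
    TableName: "benchmarks", // assumed table name
    Item: {
      PK: { S: `q#${instanceType}` },
      SK: { S: commitId },
      version: { N: String(version) },
    },
  }));
}

// e.g. enqueueCommit("fly.performance.4x", "AD351FD", 2);
```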

Example:

  • On push, the new commit, e.g. AD351FD, is added as (q#instance_type, AD351FD) with version=2. Instance types are hard-coded, e.g. ["fly.performance.4x"], depending on what instances I have available
  • Every 24h, worker instances (e.g. Fly machines) are woken up; each pulls an item from its queue (e.g. q#fly.performance.4x) and checks the commit out with git (see the worker sketch after this list)
  • The commit datetime is taken, e.g. 2024-11-11 20:15:51
  • If the database already has (2024#fly.performance.4x, c#AD351FD), the commit is skipped because it has already been benchmarked, and the queue item is simply deleted
  • Otherwise, the worker benchmarks the commit
  • The raw .jsonl is added to S3 and then linked in DynamoDB under (2024#fly.performance.4x, d#11#11#20#15#51#AD351FD) and (2024#fly.performance.4x, c#AD351FD)
  • The worker tries to pull another task; if there is none, it shuts down
  • Another repo then pulls the data from DynamoDB + S3 and builds an HTML report every 24h
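
A minimal sketch of that worker loop, under the same assumptions as above; commitDate and checkoutAndBenchmark are hypothetical stand-ins for the git checkout and benchmark run:

```ts
import {
  DynamoDBClient,
  QueryCommand,
  GetItemCommand,
  PutItemCommand,
  DeleteItemCommand,
} from "@aws-sdk/client-dynamodb";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Hypothetical helpers standing in for the git + benchmark side.
declare function commitDate(commitId: string): Promise<Date>;
declare function checkoutAndBenchmark(commitId: string): Promise<string>;

const db = new DynamoDBClient({});
const s3 = new S3Client({});
const TABLE = "benchmarks";         // assumed table name
const BUCKET = "benchmark-results"; // assumed bucket name

const pad = (n: number) => String(n).padStart(2, "0");

async function workQueue(instanceType: string) {
  for (;;) {
    // Pull one task from this instance type's queue partition.
    const queue = await db.send(new QueryCommand({
      TableName: TABLE,
      KeyConditionExpression: "PK = :pk",
      ExpressionAttributeValues: { ":pk": { S: `q#${instanceType}` } },
      Limit: 1,
    }));
    const task = queue.Items?.[0];
    if (!task) break; // queue drained -> shut down

    const commitId = task.SK.S!;
    const dt = await commitDate(commitId); // e.g. 2024-11-11 20:15:51
    const year = dt.getUTCFullYear();

    // Skip commits already benchmarked on this instance type.
    const existing = await db.send(new GetItemCommand({
      TableName: TABLE,
      Key: { PK: { S: `${year}#${instanceType}` }, SK: { S: `c#${commitId}` } },
    }));

    if (!existing.Item) {
      const jsonl = await checkoutAndBenchmark(commitId);
      const s3Path = `results/${commitId}/fjall.jsonl`;
      await s3.send(new PutObjectCommand({ Bucket: BUCKET, Key: s3Path, Body: jsonl }));

      // Link the raw results under the date-ordered and commit-addressed keys.
      const dateSk = `d#${pad(dt.getUTCMonth() + 1)}#${pad(dt.getUTCDate())}` +
        `#${pad(dt.getUTCHours())}#${pad(dt.getUTCMinutes())}#${pad(dt.getUTCSeconds())}#${commitId}`;
      for (const sk of [dateSk, `c#${commitId}`]) {
        await db.send(new PutItemCommand({
          TableName: TABLE,
          Item: {
            PK: { S: `${year}#${instanceType}` },
            SK: { S: sk },
            s3Path: { S: s3Path },
            version: task.version,
          },
        }));
      }
    }

    // The queue item is consumed either way.
    await db.send(new DeleteItemCommand({
      TableName: TABLE,
      Key: { PK: task.PK, SK: task.SK },
    }));
  }
}
```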

Every S3 file can simply be called results/[commit_id]/fjall.jsonl.

Annotations (e.g. major releases) can be hard-coded directly into the reporting repository and then added to each graph as X-axis annotations (https://apexcharts.com/docs/annotations/).
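
For example, a hard-coded release marker could look like this; the release tag and date are purely illustrative:

```ts
// X-axis annotation per https://apexcharts.com/docs/annotations/,
// merged into each chart's options as { annotations: releaseAnnotations }.
const releaseAnnotations = {
  xaxis: [
    {
      x: new Date("2024-09-01").getTime(), // hypothetical release date
      label: { text: "v2.0.0" },           // hypothetical release tag
    },
  ],
};
```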
