Merged

49 commits
a0e7bfd
[docs] Add basic configuration with Sphinx
mcopik Mar 9, 2025
d079d55
[azure] Remove old dead code
mcopik Mar 10, 2025
3416522
[gcp] Remove old dead code
mcopik Mar 10, 2025
4459067
[faas] Remove dead old code
mcopik Mar 10, 2025
5e775a1
[system] First batch of docstrings
mcopik Mar 10, 2025
75855cb
[system] Next batch of docstrings
mcopik Jun 18, 2025
cc6d763
[system] Linting of docstrings
mcopik Jun 23, 2025
4d6ae78
[openwhisk] Add correct logic for loading user and cached config
mcopik Jun 23, 2025
4a5cf8a
[system] Linting
mcopik Jun 23, 2025
611d162
[docs] Update sphinx config
mcopik Jun 23, 2025
7858853
[syhstem] Lintiong
mcopik Jun 23, 2025
b764274
[docs] Add YAML config
mcopik Jun 23, 2025
d67d275
[system] Linting
mcopik Jun 24, 2025
7d597e1
[docs] Remove duplicated elements
mcopik Jun 25, 2025
f26d03d
[docs] Removing warnings
mcopik Jun 25, 2025
c3652e2
[docs] Add table of content
mcopik Jun 25, 2025
b821628
[docs] Manual corrections to generated docstrings
mcopik Jun 25, 2025
afa2945
[system] Adapt API of storage to avoid passing mutable arguments
mcopik Jun 25, 2025
695e62d
[docs] More docstrings
mcopik Jun 25, 2025
6467963
[docs] Updated and corrected docstrings
mcopik Jun 25, 2025
963f577
[docs] Finishing reviewing Claude-generated docstrings
mcopik Jun 28, 2025
39baf91
[docs] Split benchmark documentation across files
mcopik Mar 4, 2026
0c37176
Merge remote-tracking branch 'origin/master' into feature/docs
mcopik Mar 4, 2026
2e9e2a4
[dev] Linting
mcopik Mar 4, 2026
f7d1abe
[system] Minor discrepancy in storage handling
mcopik Mar 4, 2026
1420448
[system] Minor mistake
mcopik Mar 4, 2026
7d1ffb3
[aws] Correct C++ runtime to the newest version
mcopik Mar 4, 2026
e4113da
[system] Minor fixes & corrections
mcopik Mar 4, 2026
10aa825
[docs] Update Sphinx generation
mcopik Mar 4, 2026
3175bdc
[docs] Add proper headers
mcopik Mar 4, 2026
de38b78
[dev] Add check for documentation coverage
mcopik Mar 4, 2026
0e6a17c
[docs] Fix headers
mcopik Mar 4, 2026
8210abd
[docs] Move docs again
mcopik Mar 4, 2026
9c1d971
[docs] Remove old files
mcopik Mar 4, 2026
e8ab573
[docs] Update publications
mcopik Mar 4, 2026
4560ef2
[docs] Visualize Mermaid diagrams in Sphinx
mcopik Mar 4, 2026
cef4d0d
[docs] Proper Mermaid visualization
mcopik Mar 4, 2026
715cf5f
[docs] Avoid ambiguity
mcopik Mar 4, 2026
70ca645
[docs] Fix broken link
mcopik Mar 4, 2026
8ee73b3
[docs] Make headings consistent
mcopik Mar 4, 2026
8840a20
[docs] Minor fixes and consistency issues
mcopik Mar 4, 2026
88ae5c1
[docs] Add tutorial links
mcopik Mar 4, 2026
9948e0e
[docs] Ensure that all Markdown links are properly replaced
mcopik Mar 4, 2026
fa3a26e
[docs] Update main image
mcopik Mar 4, 2026
e34f7b1
[docs] Update missing docs
mcopik Mar 4, 2026
5fa58f6
[dev] Linting
mcopik Mar 4, 2026
b1796e6
[docs] Update README
mcopik Mar 4, 2026
f92b457
[docs] Add changelog
mcopik Mar 4, 2026
a09ad66
[docs] Typos and minor fixes
mcopik Mar 4, 2026
5 changes: 5 additions & 0 deletions .circleci/config.yml
Original file line number Diff line number Diff line change
@@ -35,6 +35,11 @@ jobs:
. python-venv/bin/activate
mypy sebs --config-file=.mypy.ini
name: Python static code verification with mypy
- run:
command: |
. python-venv/bin/activate
interrogate -v --fail-under 100 sebs
name: Check for Python documentation coverage
- store_artifacts:
path: flake-reports
destination: flake-reports
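The new CI step fails the build when any `sebs` symbol lacks a docstring. As a rough illustration of what a coverage checker like `interrogate` measures (a simplified sketch, not interrogate's actual algorithm), one can count documented public members with the standard `inspect` module:

```python
import inspect
import json  # used below only as an example module to measure

def docstring_coverage(module) -> float:
    """Fraction of public functions/classes in `module` that have a docstring."""
    members = [
        obj
        for name, obj in inspect.getmembers(module)
        if (inspect.isfunction(obj) or inspect.isclass(obj)) and not name.startswith("_")
    ]
    if not members:
        return 1.0
    documented = sum(1 for obj in members if inspect.getdoc(obj))
    return documented / len(members)

print(f"json module docstring coverage: {docstring_coverage(json):.0%}")
```

`interrogate -v --fail-under 100 sebs` performs the same kind of count recursively over the package and exits non-zero below the threshold.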
28 changes: 28 additions & 0 deletions .readthedocs.yaml
@@ -0,0 +1,28 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details

# Required
version: 2

# Set the OS, Python version, and other tools you might need
build:
os: ubuntu-24.04
tools:
python: "3.13"

# Build documentation in the "docs/" directory with Sphinx
sphinx:
configuration: docs/source/conf.py

# Optionally, but recommended,
# declare the Python requirements required to build your documentation
# See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
python:
install:
- requirements: requirements.docs.txt
- requirements: requirements.txt
- requirements: requirements.aws.txt
- requirements: requirements.azure.txt
- requirements: requirements.gcp.txt
- requirements: requirements.local.txt

126 changes: 126 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,126 @@

## WiP [1.2.0](https://github.com/spcl/serverless-benchmarks/compare/v1.1...v1.2) (XXXX)

### Features

#### Container & Multi-Architecture Support

* Container deployment support for AWS Lambda (#205)
* Multi-architecture support (x86_64, arm64) for benchmarks (#227)

#### Language Support

* **C++ benchmarks**: Full support for C++ on AWS Lambda with dependency management system (#99, #251)
- Docker-based build system with dependency caching
- Dynamic dependency resolution with CMake generation
- Support for Boost, OpenCV, igraph, PyTorch, hiredis libraries
- C++ implementations: 010.sleep, 210.thumbnailer, 501.graph-pagerank, 503.graph-bfs, 411.image-recognition
* **Python**: Updated support for Python 3.8, 3.9, 3.10, 3.11, 3.12
* **Node.js**: Updated support for Node.js 14, 16, 18, 20

#### NoSQL Database Support

* Complete NoSQL storage integration across platforms (#214)
- AWS: DynamoDB support with query interface
- Azure: CosmosDB integration
- GCP: Cloud Datastore support
- Local/OpenWhisk: ScyllaDB for local testing
* New CRUD API benchmark (130.crud-api) demonstrating NoSQL operations (#214)
* Multi-tier storage system with both object and NoSQL storage

#### Platform-Specific Enhancements

* **AWS**:
- Container deployment with ECR integration
- Support for ARM64 Lambda functions
- DynamoDB table management
- Updated Lambda runtime support (#166)
* **Azure**:
- CosmosDB database management
- Improved HTTP trigger handling
* **GCP**:
- Datastore database management
- Updated dependency versions for Python 3.10+
- Custom deployment waiters
* **Local**:
- ScyllaDB wrapper for NoSQL database
- Improved container lifecycle management
- Memory measurement improvements
* **OpenWhisk**:
- ScyllaDB wrapper for NoSQL database
- Adapted to new container build API
- Downloading metrics

### Bug Fixes

* Fix init.sh to quote variables and add curl fallback (#287)
* Fix bug in type serialization (#264)
* Add SeBS user agent to 120.uploader (#255)
* Fix GCP and local deployment issues (#252)
* Fix local deployment invocation issues (#231, #249)
* Fix invocation overhead experiment (#240)
* Fix storage connection timeout on non-Linux platforms (#197)
* Fix PyTorch benchmark support for Python 3.8 and 3.9 (#165)
* Fix memory measurements on local deployment (#136)
* Fix incorrect igraph version (#113)
* Fix cache handling for code packages and containers

### Improvements

* Comprehensive docstrings across codebase (#244)
* Sphinx-based HTML documentation with API reference (#244)
* Update linting process (#241)
* Dynamic port mapping for function containers in local deployment (#199)
* Single-bucket design for cloud storage (#186)
* Improved handling of cloud credentials (#181)
* Invocation statistics reporting for benchmark and experiment results
* Improved logging throughout the system
* Improved storage and benchmarks documentation
* Versioning for Docker build images
* Support for multiple Azure subscriptions
* Enhanced regression test system with container and ARM support

### Deprecations

* Python 3.6 is no longer supported on any platform
* Node.js 8, 10, and 12 are deprecated on various platforms
* Older runtime versions have been phased out across AWS, Azure, and GCP

### Contributors

This release includes contributions from:
* @userlaurin - multi-platform robustness (#287)
* @DJAntivenom - C++ benchmark bugfixes (#264)
* @HoriaMercan - C++ benchmarks (#251)
* @rabbull - GCP bug fixes (#252), local deployment fixes (#249)
* @qdelamea-aneo - Fix invocation overhead experiment (#240)
* @ojninja16 - Versioning and resource IDs (#232)
* @MahadMuhammad - Fixes in local deployment (#231)
* @aidenh6307 - Local documentation updates (#210)
* @prajinkhadka - Container support for AWS (#205)
* @octonawish-akcodes - Local deployment and benchmark version improvements (#198)
* @Kaleab-git - Dynamic port mapping (#199), storage timeout fix (#197)
* @nurSaadat - Documentation improvements (#175)
* @lawrence910426 - Colored CLI output (#141)
* @alevy - Documentation improvements (#139)
* @skehrli - Local memory measurements (#101)
* And many others who contributed bug reports, testing, and feedback!

## [1.1.0](https://github.com/spcl/serverless-benchmarks/compare/v1.0...v1.1) (2022-05-30)

### Features

* Support for the open-source FaaS platform OpenWhisk.
* New system of handling non-root containers that do not require rebuilding Docker images.
* Initial release of deploying functions as Docker containers, first released on OpenWhisk.
* Support for function states on AWS for correct deployment and configuration updates.

### Improvements

* Deprecate Python 3.6 on all platforms.
* Update documentation and tutorials.
* Docker build images for AWS now use official AWS images.
* AWS Lambda functions now support Python 3.9 and Node.js 14. Python 3.6 and Node.js 8 are no longer supported.
* Azure Functions now support Python 3.8 and 3.9, and Node.js 12 and 14. Python 3.6, Node.js 8, and Node.js 10 are no longer supported.
* Google Cloud Functions now support Node.js 12 and 14. Node.js 6 and 8 are no longer supported.
* OpenWhisk supports Python 3.7 and 3.9, Node.js 10 and 12.
58 changes: 55 additions & 3 deletions README.md
@@ -1,5 +1,6 @@

[![CircleCI](https://circleci.com/gh/spcl/serverless-benchmarks.svg?style=shield)](https://circleci.com/gh/spcl/serverless-benchmarks)
[![Documentation Status](https://readthedocs.org/projects/sebs/badge/?version=latest)](https://sebs.readthedocs.io/en/latest/?badge=latest)
![Release](https://img.shields.io/github/v/release/spcl/serverless-benchmarks)
![License](https://img.shields.io/github/license/spcl/serverless-benchmarks)
![GitHub issues](https://img.shields.io/github/issues/spcl/serverless-benchmarks)
@@ -52,12 +53,19 @@ documentation:
* [How is the SeBS package designed?](docs/design.md)
* [How to extend SeBS with new benchmarks, experiments, and platforms?](docs/modularity.md)

### Publication
## Tutorial

When using SeBS, please cite our [Middleware '21 paper](https://dl.acm.org/doi/abs/10.1145/3464298.3476133).
We provide a tutorial on basic SeBS functionality in the [SeBS-Tutorial repository](https://github.com/spcl/sebs-tutorial.git).
It shows how to install and configure SeBS, deploy OpenWhisk on your system, and launch your first experiments.

## Publications

When using SeBS, please cite our published work.
You can cite our software repository as well, using the citation button on the right.

SeBS was originally released with the [Middleware '21 paper](https://dl.acm.org/doi/abs/10.1145/3464298.3476133).
An extended version of the paper is [available on arXiv](https://arxiv.org/abs/2012.14132), and you can
find more details about the research work [in this paper summary](https://mcopik.github.io/projects/sebs/).

```
@inproceedings{copik2021sebs,
@@ -78,6 +86,50 @@ You can cite our software repository as well, using the citation button on the r
}
```

The SeBS-Flow paper published at [EuroSys'25](https://dl.acm.org/doi/abs/10.1145/3689031.3717465)
extends SeBS with support for serverless workflows and NoSQL databases:

```
@inproceedings{10.1145/3689031.3717465,
author = {Schmid, Larissa and Copik, Marcin and Calotoiu, Alexandru and Brandner, Laurin and Koziolek, Anne and Hoefler, Torsten},
title = {SeBS-Flow: Benchmarking Serverless Cloud Function Workflows},
year = {2025},
isbn = {9798400711961},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3689031.3717465},
doi = {10.1145/3689031.3717465},
booktitle = {Proceedings of the Twentieth European Conference on Computer Systems},
pages = {902–920},
numpages = {19},
keywords = {benchmark, faas, function-as-a-service, orchestration, serverless, serverless DAG, workflow},
location = {Rotterdam, Netherlands},
series = {EuroSys '25}
}
```

The SeBS 2.0 workshop paper published at [SESAME @ EuroSys'25](https://dl.acm.org/doi/abs/10.1145/3721465.3721867)
provides an overview of new and ongoing contributions to SeBS: benchmarks, platforms, and languages.

```
@inproceedings{10.1145/3721465.3721867,
author = {Copik, Marcin and Calotoiu, Alexandru and Hoefler, Torsten},
title = {SeBS 2.0: Keeping up with the Clouds},
year = {2025},
isbn = {9798400715570},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3721465.3721867},
doi = {10.1145/3721465.3721867},
booktitle = {Proceedings of the 3rd Workshop on SErverless Systems, Applications and MEthodologies},
pages = {42–44},
numpages = {3},
keywords = {Benchmark, FaaS, Function-as-a-Service, Serverless},
location = {Rotterdam, Netherlands},
series = {SESAME '25}
}
```

## Installation

Requirements:
9 changes: 9 additions & 0 deletions benchmarks/100.webapps/110.dynamic-html/README.md
@@ -0,0 +1,9 @@
# 110.dynamic-html - Dynamic HTML

**Type:** Webapps
**Languages:** Python, Node.js
**Architecture:** x64, arm64

## Description

The benchmark represents dynamic generation of webpage contents by a serverless function. It generates HTML from an existing template, with random numbers inserted to control the output, and uses the `jinja2` and `mustache` libraries in Python and Node.js, respectively.
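The rendering step can be sketched with the standard library's `string.Template` as a stand-in for the `jinja2`/`mustache` templates the benchmark actually uses (the template and item count here are illustrative):

```python
from random import randint
from string import Template

# Illustrative stand-in for the benchmark's HTML template.
PAGE = Template("<html><body><ul>$items</ul></body></html>")

def render(num_items: int) -> str:
    # Random numbers control the generated output, as in the benchmark.
    items = "".join(f"<li>{randint(0, 100)}</li>" for _ in range(num_items))
    return PAGE.substitute(items=items)

html = render(3)
print(html)
```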
9 changes: 9 additions & 0 deletions benchmarks/100.webapps/120.uploader/README.md
@@ -0,0 +1,9 @@
# 120.uploader - Uploader

**Type:** Webapps
**Languages:** Python, Node.js
**Architecture:** x64, arm64

## Description

The benchmark implements the common workflow of uploading user-provided data to persistent cloud storage. It accepts a URL, downloads the file contents, and uploads them to storage. The Python implementation uses the `requests` library, while the Node.js version uses the third-party `requests` library installed with `npm`.
9 changes: 9 additions & 0 deletions benchmarks/100.webapps/130.crud-api/README.md
@@ -0,0 +1,9 @@
# 130.crud-api - CRUD API

**Type:** Webapps
**Languages:** Python
**Architecture:** x64, arm64

## Description

The benchmark implements a simple CRUD application simulating a webstore cart. It offers three basic methods: add a new item (`PUT`), get an item (`GET`), and query all items in a cart. It uses NoSQL storage, with each item stored under the cart ID as the primary key and the item ID as the secondary key. The Python implementation uses cloud-native libraries to access the database.
9 changes: 9 additions & 0 deletions benchmarks/200.multimedia/210.thumbnailer/README.md
@@ -0,0 +1,9 @@
# 210.thumbnailer - Thumbnailer

**Type:** Multimedia
**Languages:** Python, Node.js
**Architecture:** x64, arm64

## Description

This benchmark implements one of the most common serverless workloads: it downloads an image from cloud storage, resizes it to thumbnail size, uploads the smaller version back to cloud storage, and returns its location to the caller. To resize the image, it uses the `Pillow` and `sharp` libraries in Python and Node.js, respectively.
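The core of the resizing step is scaling the image to fit a bounding box while preserving aspect ratio, which is the contract of Pillow's `Image.thumbnail`. A minimal sketch of that size computation (the 128x128 box is an illustrative choice, not the benchmark's configured size):

```python
def thumbnail_size(width: int, height: int, max_w: int = 128, max_h: int = 128) -> tuple[int, int]:
    """Scale (width, height) to fit within max_w x max_h, preserving aspect ratio.

    Images already inside the box are left untouched (scale capped at 1.0).
    """
    scale = min(max_w / width, max_h / height, 1.0)
    return max(1, round(width * scale)), max(1, round(height * scale))

print(thumbnail_size(1920, 1080))  # (128, 72)
```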
9 changes: 9 additions & 0 deletions benchmarks/200.multimedia/220.video-processing/README.md
@@ -0,0 +1,9 @@
# 220.video-processing - Video Processing

**Type:** Multimedia
**Languages:** Python
**Architecture:** x64, arm64

## Description

The benchmark implements two operations on video files: adding a watermark and creating a GIF. Both input and output media are passed through cloud storage. To process the video, the benchmark uses `ffmpeg`, installing the most recent static binary provided by [John van Sickle](https://johnvansickle.com/ffmpeg/).
9 changes: 9 additions & 0 deletions benchmarks/300.utilities/311.compression/README.md
@@ -0,0 +1,9 @@
# 311.compression - Compression

**Type:** Utilities
**Languages:** Python
**Architecture:** x64, arm64

## Description

The benchmark implements a common file-management task of web services: it gathers a set of files from cloud storage, compresses them together, and returns a single archive to the user. The `.zip` creation uses the `shutil` module from the Python standard library.
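The archive-creation step maps directly onto the standard library; a minimal sketch using `shutil.make_archive` (paths and file names are illustrative):

```python
import pathlib
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in for files fetched from cloud storage.
    src = pathlib.Path(tmp) / "files"
    src.mkdir()
    for name in ("a.txt", "b.txt"):
        (src / name).write_text("payload")
    # make_archive appends the ".zip" suffix itself and returns the archive path.
    archive = shutil.make_archive(str(pathlib.Path(tmp) / "bundle"), "zip", root_dir=src)
    print(archive.endswith(".zip"))  # True
```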
21 changes: 21 additions & 0 deletions benchmarks/400.inference/411.image-recognition/README.md
@@ -0,0 +1,21 @@
# 411.image-recognition - Image Recognition

**Type:** Inference
**Languages:** Python
**Architecture:** x64

## Description

The benchmark is inspired by MLPerf and implements image recognition with ResNet-50. It downloads the input and the model from storage and uses the CPU-only `pytorch` library in Python.

## Important Notes

> [!WARNING]
> This benchmark contains PyTorch which is often too large to fit into a code package. Up to Python 3.7, we can directly ship the dependencies. For Python 3.8, we use an additional zipping step that requires additional setup during the first run, making cold invocations slower. Warm invocations are not affected.

> [!WARNING]
> This benchmark does not work on AWS with Python 3.9 due to excessive code size. While it is possible to ship the benchmark by zipping `torchvision` and `numpy` (see `benchmarks/400.inference/411.image-recognition/python/package.sh`), this significantly affects cold startup. On the lowest supported memory configuration of 512 MB, the cold startup can reach 30 seconds, making HTTP trigger unusable due to 30 second timeout of API gateway. Use Docker deployments for these configurations.

> [!WARNING]
> This benchmark does not work on GCP with Python 3.8+ due to excessive code size. To the best of our knowledge, there is no way of circumventing that limit, as Google Cloud offers neither layers nor custom Docker images.
Comment on lines +13 to +20
⚠️ Potential issue | 🟡 Minor

Fix markdownlint MD028 blockquote formatting.

Lines 15 and 18 introduce blank lines inside blockquotes, which triggers MD028 (no-blanks-blockquote).

🛠️ Proposed markdown fix
 > [!WARNING]
 > This benchmark contains PyTorch which is often too large to fit into a code package. Up to Python 3.7, we can directly ship the dependencies. For Python 3.8, we use an additional zipping step that requires additional setup during the first run, making cold invocations slower. Warm invocations are not affected.
-
+>
 > [!WARNING]
 > This benchmark does not work on AWS with Python 3.9 due to excessive code size. While it is possible to ship the benchmark by zipping `torchvision` and `numpy` (see `benchmarks/400.inference/411.image-recognition/python/package.sh`), this significantly affects cold startup. On the lowest supported memory configuration of 512 MB, the cold startup can reach 30 seconds, making HTTP trigger unusable due to 30 second timeout of API gateway. Use Docker deployments for these configurations.
-
+>
 > [!WARNING]
 > This benchmark does not work on GCP with Python 3.8+ due to excessive code size. To the best of our knowledge, there is no way of circumventing that limit, as Google Cloud offers neither layers nor custom Docker images.
9 changes: 9 additions & 0 deletions benchmarks/500.scientific/501.graph-pagerank/README.md
@@ -0,0 +1,9 @@
# 501.graph-pagerank - Graph PageRank

**Type:** Scientific
**Languages:** Python
**Architecture:** x64, arm64

## Description

The benchmark represents scientific computations offloaded to serverless functions. It uses the `python-igraph` library to generate an input graph and process it with the PageRank algorithm.
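The PageRank computation itself can be sketched in pure Python as power iteration; this is a stand-in for the `python-igraph` call the benchmark actually makes (graph, damping factor, and iteration count are illustrative):

```python
def pagerank(edges: list[tuple[int, int]], n: int, d: float = 0.85, iters: int = 50) -> list[float]:
    """Power-iteration PageRank on a directed graph with nodes 0..n-1."""
    out_deg = [0] * n
    for u, _ in edges:
        out_deg[u] += 1
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for u, v in edges:
            new[v] += d * rank[u] / out_deg[u]
        # Rank held by dangling nodes (no outgoing edges) is spread uniformly.
        dangling = sum(rank[u] for u in range(n) if out_deg[u] == 0)
        rank = [score + d * dangling / n for score in new]
    return rank

ranks = pagerank([(0, 1), (1, 2), (2, 0)], n=3)
print(round(sum(ranks), 6))  # ranks form a probability distribution: 1.0
```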
9 changes: 9 additions & 0 deletions benchmarks/500.scientific/502.graph-mst/README.md
@@ -0,0 +1,9 @@
# 502.graph-mst - Graph MST

**Type:** Scientific
**Languages:** Python
**Architecture:** x64, arm64

## Description

The benchmark represents scientific computations offloaded to serverless functions. It uses the `python-igraph` library to generate an input graph and process it with the Minimum Spanning Tree (MST) algorithm.
9 changes: 9 additions & 0 deletions benchmarks/500.scientific/503.graph-bfs/README.md
@@ -0,0 +1,9 @@
# 503.graph-bfs - Graph BFS

**Type:** Scientific
**Languages:** Python
**Architecture:** x64, arm64

## Description

The benchmark represents scientific computations offloaded to serverless functions. It uses the `python-igraph` library to generate an input graph and process it with the Breadth-First Search (BFS) algorithm.
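The BFS traversal can be sketched in pure Python as a stand-in for the `python-igraph` call the benchmark actually makes (the example graph is illustrative):

```python
from collections import deque

def bfs_distances(adj: dict[int, list[int]], source: int) -> dict[int, int]:
    """Hop distances from `source` to every reachable node via breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs_distances(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```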
9 changes: 9 additions & 0 deletions benchmarks/500.scientific/504.dna-visualisation/README.md
@@ -0,0 +1,9 @@
# 504.dna-visualisation - DNA Visualization

**Type:** Scientific
**Languages:** Python
**Architecture:** x64, arm64

## Description

This benchmark is inspired by the [DNAVisualization](https://github.com/Benjamin-Lee/DNAvisualization.org) project and processes a `.fasta` file with the `squiggle` Python library.
20 changes: 20 additions & 0 deletions docs/Makefile
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS ?=
SPHINXBUILD ?= sphinx-build
SOURCEDIR = source
BUILDDIR = build

# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)