Merged
24 commits
54da5eb
[system] Rename CLI driver for easier packaging
mcopik Mar 6, 2025
41840f2
[system] Add first version of the packaging script with setuptools
mcopik Mar 6, 2025
4cc5994
Merge branch 'master' into feature/better-install
mcopik Mar 16, 2026
5fc89aa
[system] Change from setup.py to pyproject.toml
mcopik Mar 16, 2026
b8eb47a
[system] New detection of platform support
mcopik Mar 16, 2026
9331d5e
[aws] Remove runtime dependency on boto3 mypy stubs
mcopik Mar 16, 2026
3393dc7
[system] Add support for resources within a package
mcopik Mar 16, 2026
45a09b5
[dev] Fix wrong copyright
mcopik Mar 16, 2026
5c47473
Merge remote-tracking branch 'origin/master' into feature/better-install
mcopik Mar 16, 2026
673d135
[system] Ensure benchmarks data is available
mcopik Mar 16, 2026
ef10a62
[system] Rework project root location to work properly with dockerfil…
mcopik Mar 16, 2026
a6957d8
[dev] Linting
mcopik Mar 16, 2026
f0e7c4c
[system] Install all platforms by default
mcopik Mar 17, 2026
4a0ffb3
[dev] Linting
mcopik Mar 17, 2026
41959c9
[system] Deprecate old install.py script
mcopik Mar 17, 2026
3394215
[docs] Update main README to new methods
mcopik Mar 17, 2026
58d8190
[system] Minor issues
mcopik Mar 17, 2026
08ed3f0
[system] Rename types
mcopik Mar 17, 2026
386dcb7
[docs] Update documentation with new CLI drivers
mcopik Mar 17, 2026
4ae8e9e
[docs] Minor fixes
mcopik Mar 17, 2026
ca909d4
[docs] Final doc fixes
mcopik Mar 17, 2026
55bf28d
Merge branch 'master' into feature/better-install
mcopik Mar 17, 2026
ce652e6
[system] Remove old example
mcopik Mar 17, 2026
74407ae
[dev] Linting
mcopik Mar 17, 2026
70 changes: 50 additions & 20 deletions README.md
@@ -55,40 +55,70 @@ Or do you want to use SeBS in your work and you need new features?

Requirements:
- Docker (at least 19)
- Python 3.10+ with:
- pip
- venv
- Python 3.10+ with `pip` + `venv` or `uv`
- `libcurl` and its headers must be available on your system to install `pycurl`
- Standard Linux tools and `zip` installed

... and that should be all. We currently support Linux and other POSIX systems with Bash available. On Windows, we recommend using WSL.

> [!WARNING]
> Please do not use SeBS with `sudo`. There is no requirement to use any superuser permissions. **Make sure** that your Docker daemon is running and your user has sufficient permissions to use it (see [Docker documentation](https://docs.docker.com/engine/install/linux-postinstall/) on configuring your user to have non-sudo access to containers). Otherwise, you might see many "Connection refused" and "Permission denied" errors when using SeBS.

SeBS can be installed in one of three ways:

### 1. Package Install (Recommended for Users)

Install SeBS directly from PyPI with your preferred tool:

```bash
# with pip
pip install serverless-benchmarks
sebs --help

# with uv
uv pip install serverless-benchmarks
uv run sebs --help
```

Now you can deploy serverless experiments :-) Benchmarks data will be automatically cloned to `~/.sebs/benchmarks-data/` on first benchmark use.
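The first-use data fetch can be pictured as a check-then-clone step. A minimal sketch, assuming a data repository URL and layout that may not match the actual SeBS implementation:

```python
import subprocess
from pathlib import Path

# Assumed repository URL -- illustrative only, not confirmed by this README.
DATA_REPO = "https://github.com/spcl/serverless-benchmarks-data.git"

def benchmarks_data_dir(home: Path) -> Path:
    """Location where benchmark input data is expected to live."""
    return home / ".sebs" / "benchmarks-data"

def ensure_benchmarks_data(home: Path, clone=subprocess.run) -> Path:
    """Clone the data repository on first use; later calls are no-ops."""
    target = benchmarks_data_dir(home)
    if not target.exists():
        target.parent.mkdir(parents=True, exist_ok=True)
        clone(["git", "clone", DATA_REPO, str(target)], check=True)
    return target
```

The `clone` parameter is injected only so the step can be exercised without network access.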

To verify the correctness of installation, you can use [our regression testing](docs/usage.md#regression).

### 2. Git Install (For Contributors)

For developers who want to modify SeBS or contribute to the project:

```bash
git clone https://github.com/spcl/serverless-benchmarks.git
cd serverless-benchmarks
# -e for editable install, i.e., changes are immediately visible in the package
# [dev] adds developer dependencies, e.g., for code linting
pip install -e '.[dev]'
sebs --help

# alternative
uv sync --extra dev
uv run sebs --help
```

### 3. Legacy Development Install

This method is deprecated and will be removed in a future release; use the Git Install method instead.

```bash
git clone https://github.com/spcl/serverless-benchmarks.git
cd serverless-benchmarks
./install.py --aws --azure --gcp --openwhisk --local
```

This will create a virtual environment in `python-venv`, and install necessary Python
dependencies and third-party dependencies. To use SeBS, you must first activate the new Python virtual environment:

```bash
. python-venv/bin/activate
python -m sebs.cli --help
```

The installation of additional platforms is controlled with the `--{platform}` and `--no-{platform}` switches. Currently, the default behavior for `install.py` is to install only the local environment.
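As a sketch of how the `--{platform}`/`--no-{platform}` switches could resolve into a set of platforms to install (the actual argument handling in `install.py` may differ):

```python
# Known platform names, mirroring the flags shown in this README.
PLATFORMS = {"aws", "azure", "gcp", "openwhisk", "local"}

def resolve_platforms(args: list[str]) -> set[str]:
    """Turn --{platform}/--no-{platform} flags into the set to install."""
    selected = {"local"}  # default: only the local environment
    for arg in args:
        name = arg.removeprefix("--no-").removeprefix("--")
        if name not in PLATFORMS:
            continue  # ignore unrelated flags
        if arg.startswith("--no-"):
            selected.discard(name)
        else:
            selected.add(name)
    return selected
```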


## Tutorial

We provide a tutorial on basic SeBS functionality in the [SeBS-Tutorial repository](https://github.com/spcl/sebs-tutorial.git).
69 changes: 0 additions & 69 deletions config/example2.json

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
28 changes: 19 additions & 9 deletions docs/contributing.md
@@ -18,6 +18,16 @@ SeBS enforces code quality through four tools:

Run `tools/linting.py sebs` to apply formatting and check code quality.

To use it, make sure that SeBS has been installed with development tools:

```bash
# with pip
pip install '.[dev]'

# with uv
uv sync --extra dev
```

## Docker Images

SeBS uses Docker images for building functions and running benchmarks:
@@ -46,22 +56,22 @@ as these are built and pushed as part of the benchmark deployment process:

```bash
# Build all images for all platforms
sebs docker build

# Build all images for a specific platform
sebs docker build --deployment aws

# Build images for a specific language and version
sebs docker build --deployment aws --language python --language-version 3.9

# Build specific image type
sebs docker build --deployment aws --language python --image-type build

# Build for specific architecture
sebs docker build --deployment aws --architecture x64

# Build C++ dependencies
sebs docker build --deployment local --language cpp --image-type dependencies --dependency-type opencv
```
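The flags above select a slice of an image matrix. As a purely hypothetical illustration of how such selections might compose into a tag (the real naming logic lives in `docker_builder.py` and is likely different):

```python
from typing import Optional

def image_tag(deployment: str, language: Optional[str] = None,
              version: Optional[str] = None, image_type: str = "build",
              architecture: str = "x64") -> str:
    """Compose a hypothetical image tag from the CLI selections."""
    parts = [image_type, deployment]
    if language:
        parts.append(language)
    if version:
        parts.append(version)
    parts.append(architecture)
    return ".".join(parts)
```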

### Pushing Images
@@ -70,19 +80,19 @@ Push locally-built infrastructure images to DockerHub (requires authentication):

```bash
# Push all images for a platform
sebs docker push --deployment aws
```

## Project Structure

```
sebs/
├── install.py # (Deprecated) Installation script
├── sebs/ # Main Python package
│ ├── __init__.py
│ ├── sebs.py # Core SeBS class
│ ├── cache.py # Caching system of cloud resources
│ ├── cli.py # Main CLI entry point
│ ├── benchmark.py # Benchmark core class
│ ├── docker_builder.py # Docker image build/push operations
│ ├── config.py # SeBS configuration management
14 changes: 7 additions & 7 deletions docs/design.md
@@ -6,7 +6,7 @@ and the external dependencies of SeBS.
## Directory structure


`sebs/cli.py` - the CLI for SeBS (see next section for details).

### Benchmarks

@@ -22,7 +22,7 @@ of a function and provide cloud compatibility with those wrappers.
### Management

`config` - JSON configuration files for SeBS, and examples of user configuration files
provided to the CLI driver with flag `--config`.

`.black.toml, .mypy.ini, .flake8.cfg` - configuration files for PEP8 linting and verification
of static types.
@@ -82,27 +82,27 @@ used for microarchitectural analysis of local invocations.

## CLI Interface

`sebs benchmark invoke` - the workflow begins by creating an instance of `sebs.faas.System`
for the selected platform. Then, an instance of `sebs.Benchmark` is created, and both objects
are used to create or update function code package and upload or update input data in the cloud
storage with the help of `sebs.faas.PersistentStorage` implementation.
In the end, an object of `sebs.faas.function.Function` is created which exposes a list of triggers
encapsulated in `sebs.faas.function.Trigger`. The function is invoked via a selected trigger,
and the output includes a JSON file with invocation ID and results.
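The invoke workflow above can be sketched with stand-in types (heavily simplified signatures; the real `sebs.faas` interfaces carry far more state):

```python
from dataclasses import dataclass, field

@dataclass
class Trigger:
    kind: str  # e.g. "http" or "library"

    def invoke(self, payload: dict) -> dict:
        # Stand-in: a real trigger calls the deployed cloud function.
        return {"invocation_id": "inv-1", "trigger": self.kind, "payload": payload}

@dataclass
class Function:
    name: str
    triggers: list[Trigger] = field(default_factory=list)

def benchmark_invoke(function: Function, trigger_kind: str, payload: dict) -> dict:
    """Pick one of the function's triggers by kind and invoke through it."""
    trigger = next(t for t in function.triggers if t.kind == trigger_kind)
    return trigger.invoke(payload)
```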

`sebs benchmark process` - the JSON result from benchmark invocation is read, deserialized,
and the cloud instance is queried for performance metrics related to these invocations.
Afterward, the updated JSON is written to the output directory.
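A minimal sketch of that processing step, assuming an illustrative result schema with an `invocations` list keyed by `id` (the actual SeBS schema differs in detail):

```python
import json

def enrich_results(results_json: str, metrics_by_id: dict) -> str:
    """Attach per-invocation cloud metrics to a serialized result document."""
    doc = json.loads(results_json)
    for invocation in doc.get("invocations", []):
        # Assumed field name "id"; unmatched invocations get empty metrics.
        invocation["metrics"] = metrics_by_id.get(invocation["id"], {})
    return json.dumps(doc)
```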

`sebs benchmark regression` - this workflow uses the `regression_suite` function from
`sebs/regression.py` to deploy all benchmarks to the selected cloud platform.
The function reports all errors encountered during deployment and invocation.

`sebs experiment invoke` - an instance of `sebs.experiments.Experiment` is created,
and the `run` function implementing the experiment logic is executed. The configuration of the experiment
is passed by the user in the config JSON.

`sebs experiment process` - similarly to the benchmark processing, the cloud metrics are queried
for all invocations in the experiment, and the results are stored as dataframes in .csv files.

## FaaS Interface
6 changes: 3 additions & 3 deletions docs/experiments.md
@@ -2,7 +2,7 @@

For details on experiments and methodology, please refer to our papers.

To run experiments, use the `sebs experiment invoke <experiment> -c <config-path>` command.
To run experiments, use the `sebs benchmark invoke <experiment> -c <config-path>` command.
The configuration of each experiment consists of two parts: deployment and experiment.
The specification of deployment, which is the same for all experiments, is shown below.
Then, each benchmark has its own JSON object containing parameters specific to the experiment.
Expand Down Expand Up @@ -75,7 +75,7 @@ The field `benchmark` and `input-size` specifies the benchmark function to be ex
To execute the benchmark, provide the path to the configuration:

```
sebs experiment invoke perf-cost --config config/example_perf_cost.json --output-dir experiments-result
```

At the end of each configuration, you should see statistical results summarizing the experiment in the output:
@@ -95,7 +95,7 @@ The full data can be found in the `experiments-result/perf-cost` directory. Each
We support querying cloud logs to locate cloud provider billing data. SeBS achieves this by reading the experiment data obtained in the previous step, finding all invocation IDs, querying cloud log entries, and finding matching data. To process results, run:

```
sebs experiment process perf-cost --config config/example_perf_cost.json --output-dir experiments-result
```

For example, on AWS, you should see the following output for each experiment configuration:
12 changes: 6 additions & 6 deletions docs/storage.md
@@ -19,13 +19,13 @@ You can start the necessary storage services using the `storage` command in SeBS

```bash
# Start only object storage
sebs storage start object config/storage.json --output-json storage_object.json

# Start only NoSQL database
sebs storage start nosql config/storage.json --output-json storage_nosql.json

# Start both storage types
sebs storage start all config/storage.json --output-json storage.json
```

The command deploys the requested storage services as Docker containers and generates a configuration file in JSON format.
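Downstream tooling can then read that file back; a hedged sketch, assuming an illustrative layout for the generated JSON (check the file SeBS actually emits):

```python
import json
from pathlib import Path

def load_storage_config(path: Path) -> dict:
    """Parse the JSON emitted by `sebs storage start`."""
    return json.loads(path.read_text())

def object_storage_address(config: dict) -> str:
    # Assumed layout: {"object": {"address": "host:port", ...}, ...}
    return config["object"]["address"]
```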
@@ -136,11 +136,11 @@ By default, storage containers are retained after experiments complete. This all
When you're done with your experiments, you can stop the storage services:

```bash
sebs storage stop object storage.json

sebs storage stop nosql storage.json

sebs storage stop all storage.json
```

### Erasing Volumes