22 changes: 18 additions & 4 deletions docs/concepts/flows.md
@@ -1,10 +1,24 @@
# Flows

Flows are machine learning pipelines, models, or scripts. They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, pyTorch, TensorFlow, MLR, WEKA,...) via the corresponding [APIs](https://www.openml.org/apis). Associated code (e.g., on GitHub) can be referenced by URL.
Flows are machine learning pipelines, models, or scripts that can transform data into a model.
They often have a number of hyperparameters which may be configured (e.g., a Random Forest's "number of trees" hyperparameter).
Examples include scikit-learn's `RandomForestClassifier`, mlr3's `"classif.rpart"`, and WEKA's `J48`, but a flow can also be "AutoML Benchmark's autosklearn integration" or any other script.
The metadata of a flow describes, if provided, the configurable hyperparameters, their default values, and recommended ranges.
They _do not_ describe a specific configuration (Setups log the configuration of a flow used in a [run](./runs.md)).
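For instance, with openml-python this metadata can be inspected directly; a minimal sketch using the scikit-learn pipeline flow (17691) referenced further down this page:

``` python
import openml

# Fetch a flow by its ID and inspect its metadata.
flow = openml.flows.get_flow(17691)

print(flow.name)        # full name of the pipeline/model
print(flow.parameters)  # hyperparameter names mapped to their default values
```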

They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, PyTorch, TensorFlow, MLR, WEKA, ...) via the corresponding [APIs](https://www.openml.org/apis), but it is also possible to define them manually (see also [this example of openml-python](http://openml.github.io/openml-python/latest/examples/Basics/simple_flows_and_runs_tutorial/) or the REST API documentation). Associated code (e.g., on GitHub) can be referenced by URL.
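As a sketch of the automated route, openml-python's scikit-learn extension can derive a Flow description from an estimator (assuming the `SklearnExtension` API; publishing requires a configured API key):

``` python
from sklearn.ensemble import RandomForestClassifier
from openml.extensions.sklearn import SklearnExtension

# Derive a Flow description (name, dependencies, hyperparameters) from the estimator.
clf = RandomForestClassifier()
flow = SklearnExtension().model_to_flow(clf)

# Upload the Flow description to OpenML.
flow.publish()
```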


!!! note "Versions"

It is convention to distinguish between software versions through the Flow's `external_version` property.
This is because both internal and external changes can be made to the code the Flow references, which would affect anyone using it.
For example, hyperparameters may be introduced or deprecated across different versions of the same algorithm, or their internal behavior may change (and result in different models).
Flows generated by e.g. `openml-python` or `mlr3oml` have the `external_version` property populated automatically.
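Continuing the flow-creation sketch above, the generated Flow records the installed library versions:

``` python
# The exact format depends on the library and extension,
# e.g. something like "openml==0.14.2,sklearn==1.3.0".
print(flow.external_version)
```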

## Analysing algorithm performance

Every flow gets a dedicated page with all known information. The Analysis tab shows an automated interactive analysis of all collected results. For instance, below are the results of a <a href="https://www.openml.org/f/17691" target="_blank">scikit-learn pipeline</a> including missing value imputation, feature encoding, and a RandomForest model. It shows the results across multiple tasks, and how the AUC score is affected by certain hyperparameters.
Every flow gets a dedicated page with information about the flow, such as its dependencies, hyperparameters, and which runs used it. The Analysis tab shows an automated interactive analysis of all collected results. For instance, below are the results of a <a href="https://www.openml.org/f/17691" target="_blank">scikit-learn pipeline</a> including missing value imputation, feature encoding, and a RandomForest model. It shows the results across multiple tasks and configurations, and how the AUC score is affected by certain hyperparameters.

<!-- <img src="img/flow_top.png" style="width:100%; max-width:800px;"/> -->
![](../img/flow_top.png)
@@ -13,7 +27,7 @@ This helps to better understand specific models, as well as their strengths and weaknesses.
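The evaluation data behind such plots can also be retrieved programmatically; a rough sketch using openml-python's `list_evaluations` (metric name and result size chosen for illustration):

``` python
import openml

# List AUC scores of runs that used flow 17691, as a pandas DataFrame.
evals = openml.evaluations.list_evaluations(
    function="area_under_roc_curve",
    flows=[17691],
    size=100,
    output_format="dataframe",
)
print(evals[["task_id", "value"]].head())
```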

## Automated sharing

When you evaluate algorithms and share the results, OpenML will automatically extract all the details of the algorithm (dependencies, structure, and all hyperparameters), and upload them in the background.
When you evaluate algorithms and share the results using `openml-python` or `mlr3oml`, the details of the algorithm (dependencies, structure, and all hyperparameters) are automatically extracted and can easily be shared. When the Flow is used in a Run, the specific hyperparameter configuration used in the experiment is also saved separately as a Setup. The code snippet below creates a Flow description for the `RandomForestClassifier` and also runs the experiment. The resulting Run contains information about the configuration of the Flow used in the experiment (the Setup).

``` python
from sklearn import ensemble
@@ -41,4 +55,4 @@ Given an OpenML run, the exact same algorithm or model, with exactly the same hyperparameter settings, can be reconstructed.
```
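The Setup of a published run can be retrieved again later; a minimal sketch assuming openml-python's run and setup accessors (the run ID is purely illustrative):

``` python
import openml

# Download a run and look up the Setup (the exact hyperparameter configuration) it used.
run = openml.runs.get_run(10437679)  # illustrative run ID
setup = openml.setups.get_setup(run.setup_id)

for parameter in setup.parameters.values():
    print(parameter.parameter_name, "=", parameter.value)
```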

!!! note
You may need the exact same library version to reconstruct flows. The API will always state the required version. We aim to add support for VMs so that flows can be easily (re)run in any environment <i class="fa fa-heart fa-fw fa-lg" style="color:red"></i>
You may need the exact same library version to reconstruct flows. The API will always state the required version.
39 changes: 31 additions & 8 deletions docs/concepts/runs.md
@@ -1,16 +1,39 @@
# Runs

Runs are the results of experiments evaluating a flow with a specific configuration on a specific task.
They contain at least a description of the hyperparameter configuration of the Flow and the predictions produced for the machine learning Task.
Users may also provide additional metadata about the experiment, such as the time it took to train or evaluate the model, or the model's predictive performance.
The OpenML server will also compute several common metrics on the provided predictions as appropriate for the task, such as accuracy for a classification task or root mean squared error for regression tasks.
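These server-computed metrics can be inspected once the run is online; a small sketch with openml-python (the run ID is illustrative):

``` python
import openml

# Fetch a run and print the evaluation measures computed by the OpenML server.
run = openml.runs.get_run(10437679)  # illustrative run ID
for measure, value in run.evaluations.items():
    print(measure, value)
```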

## Automated reproducible evaluations
Runs are experiments (benchmarks) evaluating a specific flows on a specific task. As shown above, they are typically submitted automatically by machine learning
libraries through the OpenML [APIs](https://www.openml.org/apis)), including lots of automatically extracted meta-data, to create reproducible experiments. With a few for-loops you can easily run (and share) millions of experiments.
While the REST API and the OpenML connectors allow you to manually submit Run data, openml-python and mlr3oml also support automated running of experiments and data collection.
The openml-python example below evaluates a `RandomForestClassifier` on a given task and automatically tracks information such as the duration of the experiment, the hyperparameter configuration of the model, and version information about the software used, bundling it all for convenient upload to OpenML.

## Online organization
OpenML organizes all runs online, linked to the underlying data, flows, parameter settings, people, and other details. See the many examples above, where every dot in the scatterplots is a single OpenML run.
``` python
from sklearn import ensemble
from openml import tasks, runs

# Build any model you like.
clf = ensemble.RandomForestClassifier()

# Download a task (the task ID here is just an illustration).
task = tasks.get_task(31)

# Evaluate the model on the task
run = runs.run_model_on_task(clf, task)

## Independent (server-side) evaluation
OpenML runs include all information needed to independently evaluate models. For most tasks, this includes all predictions, for all train-test splits, for all instances in the dataset, including all class confidences. When a run is uploaded, OpenML automatically evaluates every run using a wide array of evaluation metrics. This makes them directly comparable with all other runs shared on OpenML. For completeness, OpenML will also upload locally computed evaluation metrics and runtimes.
# Share the results, including the flow and all its details.
run.publish()
```

New metrics can also be added to OpenML's evaluation engine, and computed for all runs afterwards. Or, you can download OpenML runs and analyse the results any way you like.
The standardized way of accessing datasets and tasks makes it easy to run large scale experiments in this manner.
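Scaling this up to a whole benchmark suite is little more than a for-loop; a sketch assuming openml-python's suite API (publishing each run requires a configured API key):

``` python
import openml
from sklearn import ensemble

# Run the same model on every task of a curated benchmark suite.
suite = openml.study.get_suite("OpenML-CC18")
for task_id in suite.tasks:
    task = openml.tasks.get_task(task_id)
    run = openml.runs.run_model_on_task(ensemble.RandomForestClassifier(), task)
    run.publish()
```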

!!! note
Please note that while OpenML tries to maximise reproducibility, exactly reproducing all results may not always be possible because of changes in numeric libraries, operating systems, and hardware.
While OpenML tries to facilitate reproducibility, exactly reproducing all results is not generally possible because of changes in numeric libraries, operating systems, hardware, and even random factors (such as hardware errors).

## Online organization

All runs are available from the OpenML platform, either through direct access via the REST API or through visualizations on the website.
The scatterplot below shows many runs for a single Flow; each dot represents a Run.
For each run, all metadata is available online, as well as the produced predictions and any other provided artefacts.
You can download OpenML runs and analyse the results any way you like.

<!-- <img src="img/flow_top.png" style="width:100%; max-width:800px;"/> -->
![](../img/flow_top.png)
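Such run listings can also be fetched programmatically; a rough sketch using openml-python's `list_runs` (flow ID and result size chosen for illustration):

``` python
import openml

# List runs that used a given flow, as a pandas DataFrame of run metadata.
runs_df = openml.runs.list_runs(flow=[17691], size=100, output_format="dataframe")
print(runs_df[["run_id", "task_id", "setup_id"]].head())
```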