From c96b228ff361c779efa381d30f1eea6282ed55a3 Mon Sep 17 00:00:00 2001
From: Pieter Gijsbers
Date: Tue, 3 Feb 2026 11:11:13 +0200
Subject: [PATCH 1/3] Clarify Flow vs Setup and other implicit details
---
docs/concepts/flows.md | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/docs/concepts/flows.md b/docs/concepts/flows.md
index 7e768af3..537c4100 100644
--- a/docs/concepts/flows.md
+++ b/docs/concepts/flows.md
@@ -1,10 +1,24 @@
# Flows
-Flows are machine learning pipelines, models, or scripts. They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, pyTorch, TensorFlow, MLR, WEKA,...) via the corresponding [APIs](https://www.openml.org/apis). Associated code (e.g., on GitHub) can be referenced by URL.
+Flows are machine learning pipelines, models, or scripts that can transform data into a model.
+They often have hyperparameters that can be configured (e.g., a Random Forest's "number of trees" hyperparameter).
+Examples of flows are scikit-learn's `RandomForestClassifier`, mlr3's `"classif.rpart"`, or WEKA's `J48`, but a flow can also be "AutoML Benchmark's autosklearn integration" or any other script.
+The metadata of a flow describes, where provided, its configurable hyperparameters, their default values, and recommended ranges.
+A flow does _not_ describe a specific configuration ([setups](./runs.md#setups) log the configuration of a flow used in a [run](./runs.md)).
+
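+For example, this metadata can be inspected with `openml-python` (a minimal sketch; the flow id below is a placeholder, any flow id from openml.org works):
+
+``` python
+import openml
+
+# Download only the flow description; no code is executed.
+flow_id = 1234  # placeholder: replace with any flow id from openml.org
+flow = openml.flows.get_flow(flow_id)
+
+# The parameters mapping lists the configurable hyperparameters together
+# with their default values, if the uploader provided them.
+for name, default in flow.parameters.items():
+    print(name, default)
+```
+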
+They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, PyTorch, TensorFlow, mlr, WEKA, ...) via the corresponding [APIs](https://www.openml.org/apis), but it is also possible to define them manually (see [this openml-python example](http://openml.github.io/openml-python/latest/examples/Basics/simple_flows_and_runs_tutorial/) or the REST API documentation). Associated code (e.g., on GitHub) can be referenced by URL.
+
+!!! note "Versions"
+
+ By convention, software versions are distinguished through the Flow's `external_version` property.
+ This matters because both internal and external changes can be made to the code the Flow references, which affects anyone using it.
+ For example, hyperparameters may be introduced or deprecated across different versions of the same algorithm, or their internal behavior may change (and result in different models).
+ Flows generated automatically by e.g. `openml-python` or `mlr3oml` have the `external_version` property populated for you.
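+
+For illustration, the version fields can be read from a downloaded flow with `openml-python` (a sketch, reusing the placeholder flow id from above):
+
+``` python
+import openml
+
+flow = openml.flows.get_flow(flow_id)  # placeholder id as above
+
+# `version` is OpenML's own version counter for flows with this name;
+# `external_version` records the library version, e.g. "sklearn==1.4.2".
+print(flow.version, flow.external_version)
+```
+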
## Analysing algorithm performance
-Every flow gets a dedicated page with all known information. The Analysis tab shows an automated interactive analysis of all collected results. For instance, below are the results of a scikit-learn pipeline including missing value imputation, feature encoding, and a RandomForest model. It shows the results across multiple tasks, and how the AUC score is affected by certain hyperparameters.
+Every flow gets a dedicated page listing its dependencies, hyperparameters, and the runs that used it. The Analysis tab shows an automated interactive analysis of all collected results. For instance, below are the results of a scikit-learn pipeline including missing value imputation, feature encoding, and a RandomForest model. It shows the results across multiple tasks and configurations, and how the AUC score is affected by certain hyperparameters.

@@ -13,7 +27,7 @@ This helps to better understand specific models, as well as their strengths and
## Automated sharing
-When you evaluate algorithms and share the results, OpenML will automatically extract all the details of the algorithm (dependencies, structure, and all hyperparameters), and upload them in the background.
+When you evaluate algorithms and share the results using `openml-python` or `mlr3oml`, the details of the algorithm (dependencies, structure, and all hyperparameters) are extracted automatically and uploaded along with the results. When the Flow is used in a Run, the specific hyperparameter configuration of that experiment is also saved separately as a Setup. The code snippet below creates a Flow description for the `RandomForestClassifier` and also runs the experiment. The resulting Run records which configuration of the Flow was used in the experiment (the Setup).
``` python
from sklearn import ensemble
@@ -41,4 +55,4 @@ Given an OpenML run, the exact same algorithm or model, with exactly the same hy
```
!!! note
- You may need the exact same library version to reconstruct flows. The API will always state the required version. We aim to add support for VMs so that flows can be easily (re)run in any environment
\ No newline at end of file
+ You may need the exact same library version to reconstruct flows. The API will always state the required version.
From 44a488f998e3e3fb4b3448a02221146946088903 Mon Sep 17 00:00:00 2001
From: Pieter Gijsbers
Date: Tue, 3 Feb 2026 11:40:17 +0200
Subject: [PATCH 2/3] Remove link to setup
---
docs/concepts/flows.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/concepts/flows.md b/docs/concepts/flows.md
index 537c4100..e44ba4f6 100644
--- a/docs/concepts/flows.md
+++ b/docs/concepts/flows.md
@@ -4,7 +4,7 @@ Flows are machine learning pipelines, models, or scripts that can transform data
They often have hyperparameters that can be configured (e.g., a Random Forest's "number of trees" hyperparameter).
Examples of flows are scikit-learn's `RandomForestClassifier`, mlr3's `"classif.rpart"`, or WEKA's `J48`, but a flow can also be "AutoML Benchmark's autosklearn integration" or any other script.
The metadata of a flow describes, where provided, its configurable hyperparameters, their default values, and recommended ranges.
-A flow does _not_ describe a specific configuration ([setups](./runs.md#setups) log the configuration of a flow used in a [run](./runs.md)).
+A flow does _not_ describe a specific configuration (Setups log the configuration of a flow used in a [run](./runs.md)).
They are typically uploaded directly from machine learning libraries (e.g. scikit-learn, PyTorch, TensorFlow, mlr, WEKA, ...) via the corresponding [APIs](https://www.openml.org/apis), but it is also possible to define them manually (see [this openml-python example](http://openml.github.io/openml-python/latest/examples/Basics/simple_flows_and_runs_tutorial/) or the REST API documentation). Associated code (e.g., on GitHub) can be referenced by URL.
From fd0c0540af61acacb7b66e49fbd23043f7d028bf Mon Sep 17 00:00:00 2001
From: Pieter Gijsbers
Date: Tue, 3 Feb 2026 11:41:33 +0200
Subject: [PATCH 3/3] Fix missing references, clarify details, reorder
---
docs/concepts/runs.md | 39 +++++++++++++++++++++++++++++++--------
1 file changed, 31 insertions(+), 8 deletions(-)
diff --git a/docs/concepts/runs.md b/docs/concepts/runs.md
index d5a3fc12..5bdebe67 100644
--- a/docs/concepts/runs.md
+++ b/docs/concepts/runs.md
@@ -1,16 +1,39 @@
# Runs
+Runs are the results of experiments evaluating a flow with a specific configuration on a specific task.
+They contain at least the hyperparameter configuration of the Flow and the predictions produced for the machine learning Task.
+Users may also provide additional metadata related to the experiment, such as the time it took to train or evaluate the model, or locally measured predictive performance.
+The OpenML server also computes several common metrics on the provided predictions as appropriate for the task, such as accuracy for classification tasks or root mean squared error for regression tasks.
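+
+For instance, a run's metadata and the server-side evaluations can be retrieved with `openml-python` (a minimal sketch; the run id is a placeholder):
+
+``` python
+import openml
+
+run = openml.runs.get_run(10)  # placeholder: any run id from openml.org
+
+# The task that was solved and the Setup (hyperparameter configuration) used.
+print(run.task_id, run.setup_id)
+
+# Metrics computed by the OpenML server on the uploaded predictions.
+print(run.evaluations.get("predictive_accuracy"))
+```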
+
## Automated reproducible evaluations
-Runs are experiments (benchmarks) evaluating a specific flows on a specific task. As shown above, they are typically submitted automatically by machine learning
-libraries through the OpenML [APIs](https://www.openml.org/apis)), including lots of automatically extracted meta-data, to create reproducible experiments. With a few for-loops you can easily run (and share) millions of experiments.
+While the REST API and the OpenML connectors allow you to submit Run data manually, `openml-python` and `mlr3oml` also support running experiments and collecting their data automatically.
+The `openml-python` example below evaluates the `RandomForestClassifier` on a given task. It automatically tracks information such as the duration of the experiment, the hyperparameter configuration of the model, and the versions of the software used, and bundles everything for convenient upload to OpenML.
-## Online organization
-OpenML organizes all runs online, linked to the underlying data, flows, parameter settings, people, and other details. See the many examples above, where every dot in the scatterplots is a single OpenML run.
+``` python
+from sklearn import ensemble
+from openml import tasks, runs
+
+# Build any model you like.
+clf = ensemble.RandomForestClassifier()
+
+# Download a task from OpenML (the id here is only an example; any task works).
+task = tasks.get_task(31)
+
+# Evaluate the model on the task, collecting predictions and metadata.
+run = runs.run_model_on_task(clf, task)
-## Independent (server-side) evaluation
-OpenML runs include all information needed to independently evaluate models. For most tasks, this includes all predictions, for all train-test splits, for all instances in the dataset, including all class confidences. When a run is uploaded, OpenML automatically evaluates every run using a wide array of evaluation metrics. This makes them directly comparable with all other runs shared on OpenML. For completeness, OpenML will also upload locally computed evaluation metrics and runtimes.
+# Share the results, including the flow and all its details.
+run.publish()
+```
-New metrics can also be added to OpenML's evaluation engine, and computed for all runs afterwards. Or, you can download OpenML runs and analyse the results any way you like.
+The standardized way of accessing datasets and tasks makes it easy to run large-scale experiments in this manner, as sketched below.
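+
+For example, looping over the tasks of a curated benchmark suite is enough to produce and share a whole batch of runs (a sketch; it assumes the OpenML-CC18 suite and that an API key with upload rights is configured):
+
+``` python
+from sklearn import ensemble
+from openml import runs, study, tasks
+
+clf = ensemble.RandomForestClassifier()
+
+# Iterate over all tasks in the OpenML-CC18 benchmark suite.
+suite = study.get_suite("OpenML-CC18")
+for task_id in suite.tasks:
+    task = tasks.get_task(task_id)
+    run = runs.run_model_on_task(clf, task)
+    run.publish()  # share each run, including the flow and setup
+```
+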
!!! note
- Please note that while OpenML tries to maximise reproducibility, exactly reproducing all results may not always be possible because of changes in numeric libraries, operating systems, and hardware.
\ No newline at end of file
+ While OpenML tries to facilitate reproducibility, exactly reproducing all results is not generally possible because of changes in numeric libraries, operating systems, hardware, and even random factors (such as hardware errors).
+
+## Online organization
+
+All runs are available from the OpenML platform, either through direct access with the REST API or through visualizations on the website.
+The scatterplot below shows many runs for a single Flow; each dot represents a Run.
+For each run, all metadata is available online, as well as the produced predictions and any other provided artefacts.
+You can download OpenML runs and analyse the results any way you like.
+
+
+
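+
+As a sketch of such an analysis, the server-side evaluations for one flow can be pulled down with `openml-python` and analysed locally (the flow id is a placeholder):
+
+``` python
+import openml
+
+flow_id = 1234  # placeholder: any flow id from openml.org
+
+# Fetch up to 1000 server-side evaluations of this flow across all tasks.
+evals = openml.evaluations.list_evaluations(
+    function="predictive_accuracy",
+    flows=[flow_id],
+    size=1000,
+)
+
+# Depending on the library version this is a dict or a dataframe; either way
+# it can be analysed or plotted locally with your tool of choice.
+print(len(evals))
+```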