This is a task execution microservice based on the [TES standard](https://github.com/ga4gh/task-execution-schemas) that sends job executions to the [Pulsar](https://github.com/galaxyproject/pulsar) application.
Read about our project on the [Galaxy Hub](https://galaxyproject.org/news/2025-10-06-tesp-api/) and [e-INFRA CZ Blog](https://blog.e-infra.cz/blog/tesp-api/).
This effort is part of the [EuroScienceGateway](https://galaxyproject.org/projects/esg/) project.
For more details on TES, see the [Task Execution Schemas documentation](https://ga4gh.github.io/task-execution-schemas/docs/).
Pulsar is a Python server application that allows a [Galaxy](https://github.com/galaxyproject/galaxy) server to run jobs on remote systems.
## Quick start
The most straightforward way to deploy the TESP is to use Docker Compose.
```
docker compose up -d
```
Starts the API and MongoDB containers. Configure an external Pulsar in `settings.toml`
(default points to `http://localhost:8913`). REST is the default; AMQP is used only
if `pulsar.amqp_url` is set.
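A minimal `settings.toml` sketch matching the defaults above; the `pulsar.amqp_url` key is the one named in this readme, while the section layout and the name of the REST endpoint key are assumptions to check against your copy of `settings.toml`:

```
[default.pulsar]
# external Pulsar REST endpoint (the documented default)
url = "http://localhost:8913"
# uncomment to switch the communication from REST to AMQP
# amqp_url = "amqp://guest:guest@localhost:5672//"
```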
#### With pulsar_rest service:
```
docker compose --profile pulsar up -d
```
Starts a local Pulsar REST container in the same compose network.

<br />
Depending on your Docker and Docker Compose installation, you may need to use `docker-compose` (with hyphen) instead.
You might encounter a timeout error in the container runtime, which can be solved by a correct `mtu` configuration, either in the `docker-compose.yaml`:
```
networks:
  default:
    driver_opts:
      com.docker.network.driver.mtu: 1442
```
or directly in your `/etc/docker/daemon.json`:
```
{
  "mtu": 1442
}
```

The Data Transfer Services (HTTP/S3/FTP) are defined in [docker/dts](docker/dts/README.md)
and run via a separate compose file.
### Usage
If the TESP is running, you can try to submit a task. One way is to use cURL. Although the project is still in development, the TESP should be compatible with TES, so you can try TES clients such as Snakemake or Nextflow. The example below shows how to submit a task using cURL.
#### 1. Create JSON file
The first step is to prepare a JSON file with the task. For inspiration you can use [tests/test_jsons](tests/test_jsons) located in this repository, or the [TES documentation](https://ga4gh.github.io/task-execution-schemas/docs/).
Example JSON file (a minimal `md5sum` task):
```
{
  "name": "md5sum example",
  "inputs": [
    {
      "url": "ftp://<your-ftp-server>/<file_uploaded_to_storage>",
      "path": "/data/input_file",
      "type": "FILE"
    }
  ],
  "executors": [
    {
      "image": "alpine",
      "command": ["md5sum", "/data/input_file"]
    }
  ]
}
```

#### 2. Submit the task
Please check the URL of the running TES and the file with the task you just created.
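With the task saved as e.g. `task.json` (an assumed file name), it can be submitted with cURL; the address follows the Swagger endpoint documented later in this readme, so adjust it to your deployment:

```shell
# POST the task description to the TES tasks endpoint
curl -s -X POST "http://localhost:8080/v1/tasks" \
     -H "Content-Type: application/json" \
     -d "$(cat task.json)"
```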
(The only reason for the subshell is to remove whitespaces and newlines.)
After the task is submitted, the endpoint returns the task ID. This is useful to check the task status.
#### 3. Check the task status
There are more useful endpoints to check the task status.
List all tasks:
```
curl http://localhost:8080/v1/tasks
```
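A single task can be checked by its ID; the `GET /v1/tasks/{id}` endpoint and the optional `view` parameter come from the TES specification, and `<task_id>` stands for the ID returned on submission:

```shell
TASK_ID="<task_id>"   # the ID returned by POST /v1/tasks
curl -s "http://localhost:8080/v1/tasks/${TASK_ID}?view=BASIC"
```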

### Dependencies
You can use `docker-compose` instead of starting the project locally without `docker`; in that case, only the dependencies that are not covered by the compose services need to be installed manually:

| dependency | version | note |
|------------|---------|------|
| poetry | 1.1.13+ |_pip install poetry_|
| mongodb | 4.4+ |_docker-compose uses latest_|
| pulsar | 0.14.13 |_actively trying to support latest. Must have access to docker with the same host as pulsar application itself_|
| ftp server | - |_optional for I/O testing. The [docker/dts](docker/dts/README.md) stack provides FTP/S3/HTTP services_. |
### Configuring TESP API
`TESP API` uses [dynaconf](https://www.dynaconf.com/) for its configuration. Configuration is currently set up by using
[./settings.toml](https://github.com/CESNET/tesp-api/blob/main/settings.toml) file. This file declares sections which represent different environments for `TESP API`. To apply a different environment (i.e. to switch which section will be picked up), the environment variable
`FASTAPI_PROFILE` must be set to the concrete name of such section (e.g. `FASTAPI_PROFILE=dev-docker` which can be seen
in the [./docker/tesp_api/Dockerfile](https://github.com/CESNET/tesp-api/blob/main/docker/tesp_api/Dockerfile))
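For example, selecting the `dev-docker` section for the current shell session looks like this (the variable name is taken from the text above):

```shell
# dynaconf will pick the [dev-docker] section from settings.toml
export FASTAPI_PROFILE=dev-docker
```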
### Authentication
`TESP API` can run without authentication (default). To enable Basic Auth, set `basic_auth.enable = true`
and configure `basic_auth.username` and `basic_auth.password` in `settings.toml`. To enable OAuth2,
set `oauth.enable = true` and pass a Bearer token; the token is validated via the issuer in its `iss`
claim using OIDC discovery.

Container execution runtime is controlled by the `CONTAINER_TYPE` environment variable (`docker` or
`singularity`). The default is `docker`.
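A `settings.toml` sketch enabling Basic Auth; the key names come from the paragraph above, while the section nesting and the credential values are assumptions to adapt:

```
[default.basic_auth]
enable = true
username = "admin"
password = "change-me"
```

Clients then authenticate with e.g. `curl -u admin:change-me http://localhost:8080/v1/tasks`.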
### Configuring required services
You can have a look at [./docker-compose.yaml](https://github.com/CESNET/tesp-api/blob/main/docker-compose.yaml) to see how
the infrastructure for development should look like. Of course, you can configure those services in your preferred way if you are
going to start the project without `docker` or if you are trying to create other than `development` environment but some things
must remain as they are. For example, `TESP API` currently communicates with `Pulsar` via REST by default; configure Pulsar for
REST unless you set `pulsar.amqp_url` to enable AMQP.
### Current Docker services
All the current `Docker` services which will be used when the project is started with `docker-compose` have common directory
[./docker](https://github.com/CESNET/tesp-api/tree/main/docker) for configurations, data, logs and Dockerfiles if required.
The following services are currently defined by [./docker-compose.yaml](https://github.com/CESNET/tesp-api/blob/main/docker-compose.yaml):

- **tesp-api** - This project itself. Depends on mongodb
- **tesp-db** - [MongoDB](https://www.mongodb.com/) instance for persistence layer
- **pulsar_rest** - `Pulsar` configured to use REST API with access to a docker instance thanks to [DIND](https://hub.docker.com/_/docker) (enabled with `--profile pulsar`)

If you want HTTP/FTP/S3 data transfer services for testing, use the separate compose file in [docker/dts](docker/dts/README.md).
### Run the project
This project uses [Poetry](https://python-poetry.org/) for `dependency management` and `packaging`. `Poetry` makes it easy
to install libraries required by `TESP API`. It uses [./pyproject.toml](https://github.com/CESNET/tesp-api/blob/feature/TESP-0-github-proper-readme/pyproject.toml)
After the startup you can check the logs to see whether all services were initialized properly or whether any errors occurred.

- **http://localhost:8080/** - will redirect to Swagger documentation of `TESP API`. This endpoint also currently acts as a frontend. You can use it to execute REST based calls expected by the `TESP API`. Swagger is automatically generated from the sources, and therefore it corresponds to the very current state of the `TESP API` interface.
- If you run the DTS stack from [docker/dts](docker/dts/README.md), the MinIO console is available at **http://localhost:9001/** with `root` / `123456789` credentials.

### Executing simple TES task
This section will demonstrate execution of a simple `TES` task which will calculate the _[md5sum](https://en.wikipedia.org/wiki/Md5sum)_ hash of a given input. There are more approaches to how I/O can be handled by `TES`, but the main goal here is to demonstrate the `ftp server` as well.

If you want to use the bundled HTTP/FTP/S3 services, start the DTS stack in [docker/dts](docker/dts/README.md) and adapt hostnames/ports to match your network setup.

1. Upload a new file with your preferred name and content (e.g. name `holy_file` and content `Hello World!`) to your FTP-backed storage. If you run the DTS stack, use the MinIO console at **http://localhost:9001/** to create a bucket and upload the file. This file will be accessible through your FTP service and will be used as an input file for this demonstration.
2. Go to **http://localhost:8080/** and use the `POST /v1/tasks` request to create the following `TES` task (the task is sent in the request body).
In the `"inputs.url"` replace `<file_uploaded_to_storage>` with the file name you chose in the previous step. If the HTTP status of the returned response is 200, the response body will contain the `id` of the created task, which will be used to check the task status.
### Current limitations

| service | limitation |
|---------|------------|
|_Pulsar_|`TESP API` communicates with `Pulsar` only through its REST API, missing functionality for message queues |
|_Pulsar_|`TESP API` should be able to dispatch executions to multiple `Pulsar` services via different types of `Pulsar` interfaces. Currently, only one `Pulsar` service is supported |
|_Pulsar_|`Pulsar` must be "polled" for job state. Preferably `Pulsar` should notify `TESP API` about state change. This is already default behavior when using `Pulsar` with message queues |
|_TES_| Canceling a `TES` task calls Pulsar's cancel endpoint but container termination depends on Pulsar/runtime behavior. In-flight tasks may still complete. |
|_TES_| Only `cpu_cores` and `ram_gb` are mapped to container runtime flags. Other resource fields (disk, preemptible, zones) are stored but not enforced. |
|_TES_| Task `tags` are accepted and stored but not used by the scheduler or runtime. |
|_TES_| Task `logs.outputs` is not populated. Use `outputs` to persist result files. |

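For reference, the cancel operation mentioned in the table is exposed as `POST /v1/tasks/{id}:cancel` in the TES specification; a sketch with cURL, where `<task_id>` is the ID of a previously created task:

```shell
TASK_ID="<task_id>"   # the ID returned when the task was created
curl -s -X POST "http://localhost:8080/v1/tasks/${TASK_ID}:cancel"
```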
## GIT
The current main branch is `origin/main`, which also serves as the release branch for now. Developers should derive their
own feature branches, e.g. `feature/TESP-111-task-monitoring`. This project has no CI/CD configured yet; releases are
done manually by creating a tag on the current release branch. No issue tracking software is configured yet, but for
any possible future integration this project should reference commits, branches, PRs etc. with the prefix `TESP-0` as a reference
to work done before such integration. Pull requests should be merged using the `Squash and merge` option with the message format `Merge pull request #<PRnum> from <branch-name>`.
Since there is no CI/CD setup, this is only an opinionated view on how branching policies should work, and for now everything is possible.
History note: _The original intention of this project was to modify the `Pulsar` project (e.g. via forking) so its REST API would be compatible with the `TES` standard. Later a decision was made to create a separate microservice instead, decoupled from `Pulsar`, implementing the `TES` standard and distributing `TES` task executions to `Pulsar` applications._