diff --git a/.gitignore b/.gitignore
index c4dd6af..8075ad7 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,2 +1,3 @@
site/
_site/
+.venv
\ No newline at end of file
diff --git a/Makefile b/Makefile
index 6d39ca6..b470061 100644
--- a/Makefile
+++ b/Makefile
@@ -40,11 +40,17 @@ i: install
.PHONY: venv
venv:
@echo "\nCreating a virtual environment ++++++++++++++++++++++++++++++++++++++++++++++++\n"
- @python -m venv .venv
+ @if command -v python3 >/dev/null 2>&1; then \
+ python3 -m venv .venv; \
+ elif command -v python >/dev/null 2>&1; then \
+ python -m venv .venv; \
+ else \
+ echo "Python interpreter not found" >&2; exit 1; \
+ fi
@echo "\nSummary +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n"
@echo "Virtual environment created successfully."
@echo "To activate the environment for this shell session, run:"
@echo "source .venv/bin/activate"
.PHONY: v
-v: venv
\ No newline at end of file
+v: venv
diff --git a/docs/guides/guide-admin/cloudregistry.md b/docs/guides/guide-admin/cloudregistry.md
new file mode 100644
index 0000000..547ed53
--- /dev/null
+++ b/docs/guides/guide-admin/cloudregistry.md
@@ -0,0 +1,69 @@
+## Synopsis
+
+GA4GH Service Registry API implementation for the ELIXIR Cloud.
+
+Service entries comply with the [external service schema](https://github.com/ga4gh-discovery/ga4gh-service-registry/blob/8c45be52940db92c2fa1cd821519c271c22b1c4c/service-registry.yaml#L158) defined in the [GA4GH Service Registry API][ga4gh-service-registry].
+
+Developers can find the API documentation [here](https://cloud-registry.readthedocs.io/en/latest/).
+
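+Once an instance is running, its entries can be queried via the standard Service
+Registry endpoints. A minimal sketch, assuming the spec's default base path
+(`/ga4gh/registry/v1`); the host and service ID are placeholders for your own
+deployment:
+
+```bash
+# List all registered services (returns a JSON array of service entries)
+curl https://cloud-registry.example.org/ga4gh/registry/v1/services
+
+# Retrieve a single service entry by its identifier
+curl https://cloud-registry.example.org/ga4gh/registry/v1/services/SERVICE_ID
+```
+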
+## Installation
+
+You can find a Helm chart in the [GitHub repository](https://github.com/elixir-cloud-aai/cloud-registry/tree/dev/deployment) of Cloud-registry.
+
+Follow these instructions:
+
+- Install [Helm][helm-install]
+- Clone the [Cloud-registry repository](https://github.com/elixir-cloud-aai/cloud-registry/)
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/cloud-registry.git
+ ```
+
+- Browse to `deployment` to find the `Chart.yaml` and the `values.yaml` files
+
+## Usage
+
+First you must create a namespace in Kubernetes in which to deploy Cloud-registry. The
+commands below assume that everything is created in the context of this
+namespace. How the namespace is created depends on the cluster, so we won't
+document it here.
+
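+For a plain Kubernetes cluster, a minimal sketch could nevertheless look like
+this (the namespace name is arbitrary):
+
+```bash
+kubectl create namespace cloud-registry
+# Make the namespace the default for subsequent kubectl/helm commands
+kubectl config set-context --current --namespace=cloud-registry
+```
+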
+You need to edit the `values.yaml` file.
+
+After this, you can deploy Cloud-registry using `helm`:
+
+```bash
+helm install cloud-registry . -f values.yaml
+```
+
+### Updates
+
+If you want to edit any of the Deployments, you can update them with
+`helm` and the `values.yaml` file. Once edited, you can run this command:
+
+```bash
+helm upgrade cloud-registry . -f values.yaml
+```
+
+## Technical details
+
+### MongoDB
+
+The MongoDB database is deployed using:
+
+- `templates/mongo-deploy.yaml`
+
+### Cloud-registry
+
+Cloud-registry is deployed using:
+
+- `templates/cloud-registry-deploy.yaml`
+
+## Destroy
+
+Simply run:
+
+```bash
+helm uninstall cloud-registry
+```
+
diff --git a/docs/guides/guide-admin/cwlwes.md b/docs/guides/guide-admin/cwlwes.md
new file mode 100644
index 0000000..ccb05c3
--- /dev/null
+++ b/docs/guides/guide-admin/cwlwes.md
@@ -0,0 +1,105 @@
+## Synopsis
+
+Microservice implementing the [Global Alliance for Genomics and
+Health][ga4gh] (GA4GH) [Workflow Execution Service][ga4gh-wes] (WES)
+API specification for the execution of workflows written in the [Common
+Workflow Language](https://www.commonwl.org/) (CWL).
+
+cwl-WES is a core service of the [ELIXIR Cloud & AAI
+project][elixir-cloud-aai-github].
+
+## Description
+
+cwl-WES (formerly: WES-ELIXIR) is a Flask/Gunicorn
+application that makes use of [Connexion](https://github.com/spec-first/connexion) to implement the
+[GA4GH WES OpenAPI specification][ga4gh-wes]. It enables clients/users
+to execute [CWL](https://www.commonwl.org) workflows in the cloud via a [GA4GH Task Execution
+Service][ga4gh-tes] (TES)-compatible execution backend (e.g.,
+[TESK][tesk] or [Funnel][funnel]). Workflows can be sent for execution,
+previous runs can be listed, and the status and run information of individual
+runs can be queried. The service leverages [cwl-tes][res-cwl-tes] to
+interpret [CWL](https://www.commonwl.org) workflows, break them down into individual tasks and
+emit [GA4GH TES][ga4gh-tes]-compatible HTTP requests to a configured
+[TES][ga4gh-tes] instance. Access to endpoints can be configured to require
+JSON Web Token-based access tokens, such as those issued by
+[ELIXIR AAI](https://elixir-europe.org/platforms/compute/aai). Run information is stored in a
+MongoDB database.
+
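+For orientation, a deployed instance exposes the standard GA4GH WES endpoints.
+A minimal sketch of querying them; the host is a placeholder for your own
+deployment, and the `Authorization` header is only needed if the instance is
+configured to require access tokens:
+
+```bash
+# Service metadata, including supported workflow types and versions
+curl https://cwl-wes.example.org/ga4gh/wes/v1/service-info
+
+# List previous workflow runs
+curl -H "Authorization: Bearer $TOKEN" \
+  https://cwl-wes.example.org/ga4gh/wes/v1/runs
+```
+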
+Note that development is currently in the beta stage.
+Further test deployments can be found at the [ELIXIR Cloud & AAI's resource
+listings](https://github.com/elixir-cloud-aai/elixir-cloud-aai/blob/dev/resources/resources.md).
+
+cwl-WES is developed and maintained by the [ELIXIR Cloud & AAI
+project][elixir-cloud], a multinational effort aimed at establishing and
+implementing [FAIR][fair] research in the Life Sciences.
+
+## Installation
+
+You can find a Helm chart in the [GitHub repository](https://github.com/elixir-cloud-aai/cwl-wes/tree/dev/deployment) of CWL-WES.
+
+Follow these instructions:
+
+- Install [Helm][helm-install]
+- Clone the [CWL-WES repository](https://github.com/elixir-cloud-aai/cwl-wes/)
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/cwl-wes.git
+ ```
+
+- Browse to `deployment` to find the `Chart.yaml` and the `values.yaml` files
+
+## Usage
+
+First you must create a namespace in Kubernetes in which to deploy CWL-WES. The
+commands below assume that everything is created in the context of this
+namespace. How the namespace is created depends on the cluster, so we won't
+document it here.
+
+You need to edit the `values.yaml` file.
+
+After this, you can deploy CWL-WES using `helm`:
+
+```bash
+helm install cwl-wes . -f values.yaml
+```
+
+### Updates
+
+If you want to edit any of the Deployments, you can update them with
+`helm` and the `values.yaml` file. Once edited, you can run this command:
+
+```bash
+helm upgrade cwl-wes . -f values.yaml
+```
+
+## Technical details
+
+### MongoDB
+
+The MongoDB database is deployed using:
+
+- `templates/mongodb-deployment.yaml`
+
+### RabbitMQ
+
+The message broker RabbitMQ that allows the app to communicate with the
+worker is deployed using:
+
+- `templates/rabbitmq/rabbitmq-deployment.yaml`
+
+### CWL-WES
+
+CWL-WES consists of a Flask server and a Celery worker.
+These are deployed using:
+
+- `templates/wes-deployment.yaml`
+- `templates/celery-deployment.yaml`
+
+## Destroy
+
+Simply run:
+
+```bash
+helm uninstall cwl-wes
+```
+
diff --git a/docs/guides/guide-admin/drsfiler.md b/docs/guides/guide-admin/drsfiler.md
new file mode 100644
index 0000000..74b772a
--- /dev/null
+++ b/docs/guides/guide-admin/drsfiler.md
@@ -0,0 +1,67 @@
+## Synopsis
+
+Microservice implementing the [Global Alliance for Genomics and
+Health (GA4GH)][ga4gh] [Data Repository Service][ga4gh-drs] (DRS)
+API specification.
+
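+As a quick orientation, a running instance exposes the standard GA4GH DRS
+endpoints. A minimal sketch; the host and object ID are placeholders for your
+own deployment:
+
+```bash
+# Service metadata
+curl https://drs-filer.example.org/ga4gh/drs/v1/service-info
+
+# Retrieve metadata for a data object by its DRS identifier
+curl https://drs-filer.example.org/ga4gh/drs/v1/objects/OBJECT_ID
+```
+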
+## Installation
+
+You can find a Helm chart in the [GitHub repository](https://github.com/elixir-cloud-aai/drs-filer/tree/dev/deployment) of DRS-Filer.
+
+Follow these instructions:
+
+- Install [Helm][helm-install]
+- Clone the [DRS-Filer repository](https://github.com/elixir-cloud-aai/drs-filer/)
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/drs-filer.git
+ ```
+
+- Browse to `deployment` to find the `Chart.yaml` and the `values.yaml` files
+
+## Usage
+
+First you must create a namespace in Kubernetes in which to deploy DRS-Filer. The
+commands below assume that everything is created in the context of this
+namespace. How the namespace is created depends on the cluster, so we won't
+document it here.
+
+You need to edit the `values.yaml` file.
+
+After this, you can deploy DRS-Filer using `helm`:
+
+```bash
+helm install drs-filer . -f values.yaml
+```
+
+### Updates
+
+If you want to edit any of the Deployments, you can update them with
+`helm` and the `values.yaml` file. Once edited, you can run this command:
+
+```bash
+helm upgrade drs-filer . -f values.yaml
+```
+
+## Technical details
+
+### MongoDB
+
+The MongoDB database is deployed using:
+
+- `templates/mongo-deploy.yaml`
+
+### DRS-Filer
+
+DRS-Filer is deployed using:
+
+- `templates/drs-filer-deploy.yaml`
+
+## Destroy
+
+Simply run:
+
+```bash
+helm uninstall drs-filer
+```
+
diff --git a/docs/guides/guide-admin/funnel.md b/docs/guides/guide-admin/funnel.md
new file mode 100644
index 0000000..c46a98b
--- /dev/null
+++ b/docs/guides/guide-admin/funnel.md
@@ -0,0 +1,63 @@
+Follow these instructions if you wish to deploy a TES endpoint in front of your
+HPC/HTC cluster (currently tested with [Slurm][slurm] and [OpenPBS][openpbs]).
+
+- Make sure the build dependencies `make` and [Go 1.21+][go-install] are
+ installed, `GOPATH` is set and `GOPATH/bin` is added to `PATH`.
+
+ For example, in Ubuntu this can be achieved via:
+
+ ```sh
+ sudo apt update
+ sudo apt install make golang-go
+ export GOPATH=/your/desired/path
+ export PATH=$GOPATH/bin:$PATH
+ go version
+ ```
+
+- Clone the repository:
+
+ ```sh
+ git clone https://github.com/ohsu-comp-bio/funnel.git
+ ```
+
+- Build Funnel:
+
+ ```sh
+ cd funnel
+ make
+ ```
+
+- Test the installation by starting the Funnel server with:
+
+ ```sh
+ funnel server run
+ ```
+
+If all works, Funnel should be ready for deployment on your HPC/HTC.
+
+Alternatively, you can install Funnel via Homebrew:
+
+```sh
+brew tap ohsu-comp-bio/formula
+brew install funnel@0.11
+```
+
+Source: [Funnel website](https://ohsu-comp-bio.github.io/funnel/)
+
+### Slurm
+
+For the use of Funnel with Slurm, make sure the following conditions are met:
+
+1. The `funnel` binary must be placed on a server with access to Slurm.
+2. A config file must be created and placed on the same server. [This
+ file][funnel-config-slurm] can be used as a starting point.
+3. If you would like to deploy Funnel as a systemd service,
+ [this file][funnel-config-slurm-service] can be used as a template. Set the
+ correct paths to the `funnel` binary and config file.
+
+If successful, Funnel should be listening on port `8080`.
+
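+A quick way to verify this is to query the task-listing endpoint from the same
+server. Note that, depending on the Funnel version, the TES API may be served
+under `/v1` and/or `/ga4gh/tes/v1`, so adjust the path if needed:
+
+```bash
+# Should return an empty task list on a fresh installation
+curl http://localhost:8080/v1/tasks
+```
+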
+### OpenPBS
+
+!!! warning "Under construction"
+ More info coming soon...
diff --git a/docs/guides/guide-admin/images/overview_protes.svg b/docs/guides/guide-admin/images/overview_protes.svg
new file mode 100644
index 0000000..ab7ac4c
--- /dev/null
+++ b/docs/guides/guide-admin/images/overview_protes.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/guides/guide-admin/images/overview_prowes.svg b/docs/guides/guide-admin/images/overview_prowes.svg
new file mode 100644
index 0000000..6065615
--- /dev/null
+++ b/docs/guides/guide-admin/images/overview_prowes.svg
@@ -0,0 +1,6785 @@
+
+
diff --git a/docs/guides/guide-admin/index.md b/docs/guides/guide-admin/index.md
index 79fca08..977e853 100644
--- a/docs/guides/guide-admin/index.md
+++ b/docs/guides/guide-admin/index.md
@@ -70,387 +70,19 @@ optionally, available in the ELIXIR Cloud compute network.
### Deploying compute
Depending on whether you have a Native Cloud cluster or an HPC/HTC, you will
-need to follow the instructions for deploying [TESK][tesk] or [Funnel][funnel]
-below, respectively.
-
-#### Deploying TESK
-
-[TESK][tesk] uses the Kubernetes Batch API ([Jobs][k8s-jobs]) to schedule
-execution of TES tasks. This means that it should be possible to deploy TESK in
-any flavor of Kubernetes, but tests are currently only performed with
-[Kubernetes][k8s], [OpenShift][openshift], and [Minikube][minikube]. Follow
-these instructions if you wish to deploy a TES endpoint on your Native Cloud
-cluster, and please let us know if you deploy TESK in any new and interensting
-platform.
-
-TESK currently does not use any other storage (DB) than Kubernetes itself.
-[Persistent Volume Claims][k8s-pvc] are used as a temporary storage to handle
-input and output files of a task and pass them over between executors of a
-task. Note that PVCs are destroyed immediately after task completion! This
-means your cluster will need to provide a ReadWriteMany
-[StorageClass][k8s-storage-class]. Commonly used storage classes are
-[NFS][nfs] and [CephFS][cephfs].
-
-Here is an overview of TESK's architecture:
-
-
-
-A [Helm][helm] chart is provided for the convenient deployment of TESK. The
-chart is available in the [TESK code repository][tesk-helm].
-
-Follow these steps:
-
-1. [Install Helm][helm-install]
-2. Clone the [TESK repository][tesk]:
-
- ```sh
- git clone https://github.com/elixir-cloud-aai/TESK.git
- ```
-
-3. Find the Helm chart at `charts/tesk`
-4. Edit file [`values.yaml`][tesk-helm-values] (see
- [notes](#notes-for-editing-chart-values) below)
-5. Log into the cluster and install TESK with:
-
- ```sh
- helm install -n TESK-NAMESPACE TESK-DEPLOYMENT-NAME . \
- -f secrets.yaml \
- -f values.yaml
- ```
-
- * Replace `TESK-NAMESPACE` with the name of the namespace where you want to
- install TESK. If the namespace is not specified, the default namespace will
- be used.
- * The argument provided for `TESK-DEPLOYMENT-NAME` will be used by Helm to
- refer to the deployment, for example when upgrading or deleting the
- deployment. You can choose whichever name you like.
-
-You should now have a working TESK isntance!
-
-##### Notes for editing chart values
-
-In the [TESK deployment documentation][tesk-docs-deploy] documentation there is
-a [description of every value][tesk-docs-deploy-values]. Briefly, the most
-important are:
-
-1. `host_name`: Will be used to serve the API.
-2. `storageClass`: Specify the storage class. If left empty, TESK will use the
- default one configred in the Kubernetes cluster.
-3. `auth.mode`: Enable (`auth`) or disable (`noauth`; default) authentication.
- When enabled, an OIDC client **must** be in a file `./secrets.yaml`, with
- the following format:
-
- ```yaml
- auth:
- client_id:
- client_secret:
- ```
-
-4. `ftp`: Which FTP credentials mode to use. Two options are supported:
- `.classic_ftp_secret` for basic authentication (username and password) or
- `.netrc_secret` for using a [`.netrc`][netrc] file.
-
- For the classic approach, you must write in `values.yaml`:
-
- ```yaml
- ftp:
- classic_ftp_secret: ftp-secret
- ```
-
- And in a file `.secrets.yaml` write down the username and password as:
-
- ```yaml
- ftp:
- username:
- password:
- ```
-
- For the `.netrc` approach, create a `.netrc` file in the `ftp` folder with
- the connections details in the correct format.
-
-5. `clusterType`: Type of Kubernetes flavor. Currently supported: `kubernetes`
- (default) and `openshift`.
-
-!!! warning "Careful"
- When creating a `.secrets.yaml` file, ensure that the file is never shared
- or committed to a code repository!
-
-##### Notes for deployment with microk8s
-
-This section outlines how to install TESK via [microk8s][microk8s] as tested on
-an Ubuntu 22.04 machine.
-
-First, install microk8s through the Snap Store and add yourself to the
-`microk8s` group::
-
-```bash
-sudo snap install microk8s --classic
-sudo usermod -a -G microk8s $USER
-```
-
-Now let's create a directory for the microk8s configuration and enable Helm:
-
-```bash
-mkdir ~/.kube
-sudo chown -R $USER ~/.kube
-microk8s enable helm3
-```
-
-Next, let's clone the TESK repository and move into it the Helm chart directory:
-
-```bash
-git clone https://github.com/elixir-cloud-aai/TESK.git
-cd TESK/charts/tesk
-```
-
-Follow the deployment instructions to create `secrets.yaml` and modify
-`values.yaml` as per your requirements.
-
-> You **MUST** set `host_name`. To make the service available through the
-> internet, see further below on how to configure the `service` section.
-
-Great - you are now ready to deploy TESK!
-
-First, let's create a namespace:
-
-```bash
-microk8s kubectl create namespace NAMESPACE
-```
-
-where `NAMESPACE` is an arbitrary name for your resource group.
-
-Now let's use Helm to install:
-
-```bash
-microk8s helm3 install -n NAMESPACE RELEASE_NAME . -f secrets.yaml -f values.yaml
-```
-
-where `RELEASE_NAME` is an arbitrary name for this particular TESK release.
-
-Congratulations - TESK should now be successfully deployed!
-
-To find out the IP address at which TESK is available, run the following
-command:
-
-```bash
-microk8s kubectl get svc -n NAMESPACE
-```
-
-The output could look something like this:
-
-```console
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-tesk-api ClusterIP 123.123.123.123 8080/TCP 8s
-```
-
-Use the `CLUSTER-IP` and the `PORT` with the following template to construct the
-URL at which the service is available (and make sure to replace the dummy URL
-when you want to try out the calls below):
-
-```console
-http://CLUSTER-IP:PORT/ga4gh/tes/v1
-```
-
-So, in this example case, we get the following URL:
-
-```console
-http://123.123.123.123:8080/ga4gh/tes/v1
-```
-
-You can now test the intallation with the following example call to get a list
-of tasks:
-
-```bash
-curl http://123.123.123.123:8080/ga4gh/tes/v1/tasks
-```
-
-If everything worked well, you should get an output like this:
-
-```json
-{
- "tasks": []
-}
-```
-
-Let's try to send a small task to TESK:
-
-```console
-curl \
- -H "Accept: application/json" \
- -H "Content-Type: application/json" \
- -X POST \
- --data '{"executors": [ { "command": [ "echo", "TESK says: Hello World" ], "image": "alpine" } ]}' \
- "http://123.123.123.123:8080/ga4gh/tes/v1/tasks"
-```
-
-That should give you a task ID:
-
-```json
-{
- "id" : "task-123ab456"
-}
-```
-
-You can run the task list command from before again. Now the response should not
-be an empty list anymore. Rather, you should see something like this:
-
-```json
-{
- "tasks" : [ {
- "id" : "task-123ab456",
- "state" : "COMPLETE"
- } ]
-}
-```
-
-To get more details on your task, use the task ID from before in a call like
-this:
-
-```bash
-curl http://123.123.123.123:8080/ga4gh/tes/v1/tasks/TASK_ID?view=FULL
-```
-
-We can use `jq` to parse the results. Let's say we want to see the logs of the
-first (only, in this case) TES executor, we could do something like this:
-
-```console
-$curl -s http://123.123.123.123:8080/ga4gh/tes/v1/tasks/task-123ab456?view=FULL | jq '.logs[0].logs'
-```
-
-Which would give us an output like this:
-
-```json
-[
- {
- "start_time": "2023-11-01T14:54:20.000Z",
- "end_time": "2023-11-01T14:54:25.000Z",
- "stdout": "TESK says: Hello World\n",
- "exit_code": 0
- }
-]
-```
-
-Note that in the example, the API is only accessible internally. To make it
-accessible publicly, we need to properly configure the `service` section in
-`values.yaml`.
-
-In particular, we would like to set the type to `NodePort` and then set an open
-port on the host machine at which the API is exposed. For example, with
-
-```yaml
-service:
- type: NodePort
- node_port: 31567
-```
-
-Kubernetes will route requests coming in to port `31567` on the host machine to
-port `8080` on the pod.
-
-Let's confirm this by upgrading the Helm chart and again inspecting the services
-in our namespace with:
-
-```bash
-microk8s helm3 upgrade -n NAMESPACE RELEASE_NAME . -f secrets.yaml -f values.yaml
-microk8s kubectl get svc -n NAMESPACE
-```
-
-We should get an output like this:
-
-```console
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-tesk-api NodePort 123.123.123.111 8080:31567/TCP 5s
-```
-
-Indeed, the port section changed as expected. Now, note that the `CLUSTER-IP`
-_also_ changed. However, this is not a problem as Kubernetes will manage the
-routing, so we don't really need to know the `CLUSTER-IP`. Instead, now we can
-use the hostname (or IP) of the host machine, together with the port we set to
-call our TES API from anywhere:
-
-```
-curl http://HOST_NAME_OR_IP:31567/ga4gh/tes/v1/tasks
-```
-
-> Of course you need to make sure that the port you selected is opened for
-> public access. This will depend on your router/firewall settings.
-
-If you would like to tear down the TESK service, simply run:
-
-```bash
-microk8s helm uninstall RELEASE_NAME -n NAMESPACE
-```
-
-#### Deploying Funnel
-
-Follow these instructions if you wish to deploy a TES endpoint in front of your
-HPC/HTC cluster (currently tested with [Slurm][slurm] and [OpenPBS][openpbs].
-
-1. Make sure the build dependencies `make` and [Go 1.11+][go-install] are
- installed, `GOPATH` is set and `GOPATH/bin` is added to `PATH`.
-
- For example, in Ubuntu this can be achieved via:
-
- ```sh
- sudo apt update
- sudo apt install make golang-go
- export GOPATH=/your/desired/path
- export PATH=$GOPATH/bin:$PATH
- go version
- ```
-
-2. Clone the repository:
-
- ```sh
- git clone https://github.com/ohsu-comp-bio/funnel.git
- ```
-
-3. Build Funnel:
-
- ```sh
- cd funnel
- make
- ```
-
-4. Test the installation by starting the Funnel server with:
-
- ```sh
- funnel server run
- ```
-
-If all works, Funnel should be ready for deployment on your HPC/HTC.
-
-##### Slurm
-
-For the use of Funnel with Slurm, make sure the following conditions are met:
-
-1. The `funnel` binary must be placed in a server with access to Slurm.
-2. A config file must be created and placed on the same server. [This
- file][funnel-config-slurm] can be used as a starting point.
-3. If we would like to deploy Funnel as a Systemd service,
- [this file][funnel-config-slurm-service] can be used as a template. Set the
- correct paths to the `funnel` binary and config file.
-
-If successfull Funnel should be listening on port `8080`.
-
-##### OpenPBS
-
-!!! warning "Under construction"
- More info coming soon...
+need to follow the instructions for deploying [TESK](tesk.md) or [Funnel](funnel.md),
+respectively.
### Deploying storage
-Follow the instructions below to connect your TES endpoint to one or more
+Follow the instructions to connect your TES endpoint to one or more
ELIXIR Cloud cloud storage solutions. The currently supported solutions are:
- [MinIO][minio] (Amazon S3)
-- [`vsftpd`][vsftpd] (FTP)
+- [vsftpd][vsftpd] (FTP)
!!! note "Other storage solutions"
- Other S3 and FTP implementations may work but have not being tested.
+ Other S3 and FTP implementations may work but have not been tested.
#### Deploying MinIO (Amazon S3)
@@ -460,6 +92,8 @@ documentation][minio-docs-k8s]. It is very simple
If you are deploying Minio to OpenShift, you may find this
[Minio-OpenShift][minio-deploy-openshift-template] template useful.
+You can also follow the instructions in this [MinIO Helm chart](https://github.com/CSCfi/helm-charts/tree/main/charts/minio), sketched briefly below.
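+
+A minimal sketch of using that chart from a local clone (credentials, storage
+size, and other settings still need to be adjusted in the chart's `values.yaml`):
+
+```bash
+git clone https://github.com/CSCfi/helm-charts.git
+cd helm-charts/charts/minio
+helm install minio . -f values.yaml
+```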
+
#### Deploying `vsftpd` (FTP)
There are a lot of guides available online to deploy [`vsftpd`][vsftpd], for
diff --git a/docs/guides/guide-admin/protes.md b/docs/guides/guide-admin/protes.md
new file mode 100644
index 0000000..a19b363
--- /dev/null
+++ b/docs/guides/guide-admin/protes.md
@@ -0,0 +1,131 @@
+## Synopsis
+
+proTES is a robust and scalable [Global Alliance for Genomics and Health
+(GA4GH)][ga4gh] [Task Execution Service (TES) API](https://github.com/ga4gh/task-execution-schemas) gateway
+that may play a pivotal role in augmenting the capabilities of your GA4GH Cloud
+ecosystem by offering flexible middleware injection for effectively federating
+atomic, containerized workloads across on-premises, hybrid, and multi-cloud
+environments composed of GA4GH TES nodes.
+
+## Description
+
+The proTES gateway may serve as a crucial component in federated compute networks
+based on the GA4GH Cloud ecosystem. Its primary purpose is to provide
+centralized features to a federated network of independently operated GA4GH TES
+instances. As such, it can serve, for example, as a compatibility layer, a load
+balancer workload distribution layer, a public entry point to an enclave of
+independent compute nodes, or a means of collecting telemetry.
+
+When TES requests are received, proTES applies configured middleware before
+forwarding the requests to appropriate TES instances in the network. A plugin
+system makes it easy to write and inject middlewares tailored to specific
+requirements, such as for access control, request/response processing or
+validation, or the selection of suitable endpoints considering data use
+restrictions and client preferences.
+
+### Built-in middleware plugins
+
+Currently, there are two plugins shipped with proTES that each serve as
+proof-of-concept examples for different task distribution scenarios:
+
+* **Load balancing**: The `pro_tes.middleware.task_distribution.random` plugin
+ evenly (actually: randomly!) distributes workloads across a network of TES
+ endpoints
+* **Bringing compute to the data**: The
+ `pro_tes.middleware.task_distribution.distance` plugin selects TES endpoints
+ to relay incoming requests to in such a way that the distance the (input) data
+ of a task has to travel across the network of TES endpoints is minimized.
+
+### Implementation notes
+
+proTES is a [Flask][res-flask] microservice that supports
+[OAuth2][res-oauth2]-based authorization out of the box (bearer authentication)
+and stores information about incoming and outgoing tasks in a NoSQL database
+([MongoDB][res-mongodb]). Based on our [FOCA][res-foca] microservice archetype,
+it is highly configurable in a declarative (YAML-based!) manner. Forwarded tasks
+are tracked asynchronously via a [RabbitMQ][res-rabbitmq] broker and
+[Celery][res-celery] workers that can be easily scaled up. Both a
+[Helm][res-helm] chart and a [Docker Compose][res-docker-compose] configuration
+are provided for easy deployment in native cloud-based production and
+development environments, respectively.
+
+
+
+## Installation
+
+You can find a Helm chart in the [GitHub repository](https://github.com/elixir-cloud-aai/proTES/tree/dev/deployment) of proTES.
+
+Follow these instructions:
+
+- Install [Helm][helm-install]
+- Clone the [proTES repository](https://github.com/elixir-cloud-aai/proTES/)
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/proTES.git
+ ```
+
+- Browse to `deployment` to find the `Chart.yaml` and the `values.yaml` files
+
+## Usage
+
+First you must create a namespace in Kubernetes in which to deploy proTES. The
+commands below assume that everything is created in the context of this
+namespace. How the namespace is created depends on the cluster, so we won't
+document it here.
+
+You need to edit the `values.yaml` file to specify your `applicationDomain` and the `clusterType` (see the sketch below).
+
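+A minimal sketch of the relevant part of `values.yaml` (the domain is a
+placeholder; check the exact keys against the chart's own `values.yaml`):
+
+```yaml
+applicationDomain: protes.example.org
+clusterType: kubernetes
+```
+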
+After this, you can deploy proTES using `helm`:
+
+```bash
+helm install protes . -f values.yaml
+```
+
+### Updates
+
+If you want to edit any of the Deployments, you can update them with
+`helm` and the `values.yaml` file. Once edited, you can run this command:
+
+```bash
+helm upgrade protes . -f values.yaml
+```
+
+## Technical details
+
+### MongoDB
+
+The MongoDB database is deployed using:
+
+- `templates/mongodb/mongodb-deployment.yaml`
+
+### RabbitMQ
+
+The message broker RabbitMQ that allows the app to communicate with the worker
+is deployed using:
+
+- `templates/rabbitmq/rabbitmq-deployment.yaml`
+
+### proTES
+
+proTES consists of two deployments: a Flask server and a Celery worker. These
+are deployed using:
+
+- `templates/protes/protes-deployment.yaml`
+- `templates/protes/celery-deployment.yaml`
+
+You can use `ReadWriteOnce` if you don't have a `StorageClass`
+that supports `RWX`. In that case, a `podAffinity` will be set to have the proTES pods
+running on the same node.
+
+## Destroy
+
+Simply run:
+
+```bash
+helm uninstall protes
+```
+
diff --git a/docs/guides/guide-admin/prowes.md b/docs/guides/guide-admin/prowes.md
new file mode 100644
index 0000000..9a48214
--- /dev/null
+++ b/docs/guides/guide-admin/prowes.md
@@ -0,0 +1,132 @@
+[proWES](https://github.com/elixir-cloud-aai/proWES/) is a proxy service that wraps around the [GA4GH WES](https://github.com/ga4gh/workflow-execution-service-schemas) API.
+
+When WES requests are received, proWES applies one or more configurable middlewares
+before forwarding the requests to appropriate WES instances in the network.
+A plugin system makes it easy to write and inject middlewares tailored to specific
+requirements, such as for access control, request/response processing or validation,
+or the selection of suitable endpoints considering data use restrictions and client
+preferences.
+
+### Implementation notes
+
+proWES is a Flask microservice that supports OAuth2-based authorization out of the box
+(bearer authentication) and stores information about incoming and outgoing tasks in a
+NoSQL database (MongoDB). Based on our FOCA microservice archetype, it is highly
+configurable in a declarative (YAML-based!) manner. Forwarded tasks are tracked
+asynchronously via a RabbitMQ broker and Celery workers that can be easily scaled up.
+Both a Helm chart and a Docker Compose configuration are provided for easy deployment in
+native cloud-based production and development environments, respectively.
+
+
+
+## Installation
+
+You can find a Helm chart in the [GitHub repository](https://github.com/elixir-cloud-aai/proWES/tree/dev/deployment) of proWES.
+
+Follow these instructions:
+
+- Install [Helm][helm-install]
+- Clone the [proWES repository](https://github.com/elixir-cloud-aai/proWES/)
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/proWES/
+ ```
+
+- Browse to `deployment` to find the `Chart.yaml` and the `values.yaml` files
+
+## Usage
+
+First you must create a namespace in Kubernetes in which to deploy proWES. The
+commands below assume that everything is created in the context of this
+namespace. How the namespace is created depends on the cluster, so we won't
+document it here.
+
+There are some prerequisites to deploying proWES on Kubernetes (a sketch for creating the MongoDB Secret follows the list). Namely:
+
+- MongoDB:
+ - in the same namespace reachable via 'mongodb'
+ - DB called `prowes-db` created
+ - database-user and database-password for `prowes-db` available in a Secret
+ called 'mongodb'
+- RabbitMQ:
+ - in the same namespace reachable via 'rabbitmq-cluster'
+- A Secret called `netrc`, created from a `.netrc` file (see below)
+
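+For the MongoDB credentials, a minimal sketch of creating the Secret could look
+like this (the key names follow the list above; the values are placeholders):
+
+```bash
+kubectl create secret generic mongodb \
+  --from-literal=database-user=prowes-user \
+  --from-literal=database-password=CHANGE_ME
+```
+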
+You'll need to configure an SFTP server connection using a `.netrc` file with
+the following format:
+
+```
+machine my-sftp-server.com
+login
+password
+```
+
+Create a Kubernetes Secret from the `.netrc` file:
+
+```bash
+kubectl create secret generic netrc --from-file .netrc
+```
+
+You need to edit the `values.yaml` file to specify your `applicationDomain` and the `clusterType`.
+
+After this, you can deploy proWES using `helm`:
+
+```bash
+helm install prowes . -f values.yaml
+```
+
+### Updates
+
+If you want to edit any of the Deployments, you can update them with
+`helm` and the `values.yaml` file. Once edited, you can run this command:
+
+```bash
+helm upgrade prowes . -f values.yaml
+```
+
+If you want to point to a different FTP server or change the login credentials
+for the current FTP server, you can update the `.netrc` secret like so:
+
+```bash
+kubectl create secret generic netrc --from-file .netrc --dry-run=client -o yaml | kubectl apply -f -
+```
+
+## Technical details
+
+### MongoDB
+
+The MongoDB database is deployed using:
+
+- `templates/mongodb/mongodb-deployment.yaml`
+
+### RabbitMQ
+
+The message broker RabbitMQ that allows the app to communicate with the worker
+is deployed using:
+
+- `templates/rabbitmq/rabbitmq-deployment.yaml`
+
+### proWES
+
+proWES consists of two deployments: a Flask server and a Celery worker. These
+are deployed using:
+
+- `templates/prowes/prowes-deployment.yaml`
+- `templates/prowes/celery-deployment.yaml`
+
+You can use `ReadWriteOnce` if you don't have a `StorageClass`
+that supports `RWX`. In that case, a `podAffinity` will be set to have the proWES pods
+running on the same node.
+
+## Destroy
+
+Simply run:
+
+```bash
+helm uninstall prowes
+```
+
diff --git a/docs/guides/guide-admin/services_to_ls_aai.md b/docs/guides/guide-admin/services_to_ls_aai.md
index fdbcb69..cb56d4a 100644
--- a/docs/guides/guide-admin/services_to_ls_aai.md
+++ b/docs/guides/guide-admin/services_to_ls_aai.md
@@ -37,55 +37,48 @@ cull:
Before configuring Hedgedoc, you need to register your service with LS-Login. Follow the registration process at https://lifescience-ri.eu/ls-login/documentation/how-to-integrate/registration.html
-Hedgedoc is configured using environment variables. This guide assumes that a Hedgedoc is already deployed, in our case we used this chart:
-
-https://github.com/CSCfi/helm-charts/tree/main/charts/hedgedoc
+Hedgedoc is configured using environment variables. This guide assumes that a Hedgedoc instance is already deployed; in our case, we used this [chart](https://github.com/CSCfi/helm-charts/tree/main/charts/hedgedoc).
Once Hedgedoc is deployed, in order to add LS-AAI login one just needs to add these variables:
+```yaml
- name: CMD_OAUTH2_USER_PROFILE_URL
- - value: https://login.aai.lifescience-ri.eu/oidc/userinfo
+ value: https://login.aai.lifescience-ri.eu/oidc/userinfo
- name: CMD_OAUTH2_USER_PROFILE_USERNAME_ATTR
- - value: preferred_username
+ value: preferred_username
- name: CMD_OAUTH2_USER_PROFILE_DISPLAY_NAME_ATTR
- - value: name
+ value: name
- name: CMD_OAUTH2_USER_PROFILE_EMAIL_ATTR
- - value: email
+ value: email
- name: CMD_OAUTH2_TOKEN_URL
- - value: https://login.aai.lifescience-ri.eu/oidc/token
+ value: https://login.aai.lifescience-ri.eu/oidc/token
- name: CMD_OAUTH2_AUTHORIZATION_URL
- - value: https://login.aai.lifescience-ri.eu/oidc/authorize
+ value: https://login.aai.lifescience-ri.eu/oidc/authorize
- name: CMD_OAUTH2_CLIENT_ID
- - value: _REPLACE BY CLIENT ID_
+ value: _REPLACE BY CLIENT ID_
- name: CMD_OAUTH2_CLIENT_SECRET
- - value: _REPLACE BY CLIENT SECRET_
+ value: _REPLACE BY CLIENT SECRET_
- name: CMD_OAUTH2_PROVIDERNAME
- - value: ELIXIR Cloud & AAI
+ value: ELIXIR Cloud & AAI
- name: CMD_OAUTH2_SCOPE
- - value: openid email profile
-
-The documentation from Hedgedoc about this is at:
+ value: openid email profile
+```
-https://docs.hedgedoc.org/configuration/#oauth2-login
+The documentation from Hedgedoc about this is available [here](https://docs.hedgedoc.org/configuration/#oauth2-login).
# Using LS-Login in MinIO
-LS-Login can be activated in MinIO either by using the MinIO console using the OIDC configuration or by setting environmental variables ([MinIO OIDC Documentation](https://min.io/docs/minio/linux/operations/external-iam/configure-openid-external-identity-management.html)).
+LS-Login can be activated in MinIO either through the MinIO console's OIDC configuration or by setting environment variables ([MinIO OIDC Documentation](https://min.io/docs/minio/linux/operations/external-iam/configure-openid-external-identity-management.html)).
-- Config URL (MINIO_IDENTITY_OPENID_CONFIG_URL)
- - https://login.aai.lifescience-ri.eu/oidc/.well-known/openid-configuration
-- Client ID (MINIO_IDENTITY_OPENID_CLIENT_ID)
- - ID of the LS-Login service
-- Client secret (MINIO_IDENTITY_OPENID_CLIENT_SECRET)
- - Secret of the LS-Login service
-- Display Name (MINIO_IDENTITY_OPENID_DISPLAY_NAME)
- - A human readable label for the login button (e.g. `LS-Login`)
-- Scopes (MINIO_IDENTITY_OPENID_SCOPES)
- - Scopes that will be requested from LS-Login (e.g. `openid,email,profile`)
-- Role policy (MINIO_IDENTITY_OPENID_ROLE_POLICY)
- - Name of a policy in MinIO that will be used to manage access of LS-Login users (e.g. `readonly`).
-- Claim User Info (MINIO_IDENTITY_OPENID_CLAIM_USERINFO)
- - Allow MinIO to request the userinfo endpoint for additional information (`on`).
+```sh
+export MINIO_IDENTITY_OPENID_CONFIG_URL="https://login.aai.lifescience-ri.eu/oidc/.well-known/openid-configuration"
+export MINIO_IDENTITY_OPENID_CLIENT_ID=""                   # ID of the LS-Login service
+export MINIO_IDENTITY_OPENID_CLIENT_SECRET=""               # Secret of the LS-Login service
+export MINIO_IDENTITY_OPENID_DISPLAY_NAME="LS-Login"        # Label for the login button
+export MINIO_IDENTITY_OPENID_SCOPES="openid,email,profile"  # Scopes requested from LS-Login
+export MINIO_IDENTITY_OPENID_ROLE_POLICY="readonly"         # MinIO policy assigned to LS-Login users
+export MINIO_IDENTITY_OPENID_CLAIM_USERINFO="on"            # Query the userinfo endpoint for additional claims
+```
MinIO supports two different mechanisms for authorization of users with OIDC ([MinIO OIDC authorization](https://min.io/docs/minio/linux/administration/identity-access-management/oidc-access-management.html#minio-external-identity-management-openid)). It is recommended to use the RolePolicy flow. Here, all LS-Login users in MinIO will be assigned to one or more policies. These policies can control access to specific buckets by group membership; e.g. require that users belong to a specific LS-AAI group (see [policy based access control](https://min.io/docs/minio/linux/administration/identity-access-management/policy-based-access-control.html#tag-based-policy-conditions)).
@@ -115,4 +108,4 @@ In the example below, access to a bucket (`sensitive/`) is restricted to a list
}
]
}
-```
\ No newline at end of file
+```
diff --git a/docs/guides/guide-admin/tesk.md b/docs/guides/guide-admin/tesk.md
new file mode 100644
index 0000000..b06b77c
--- /dev/null
+++ b/docs/guides/guide-admin/tesk.md
@@ -0,0 +1,336 @@
+## Deploying TESK
+
+[TESK][tesk] uses the Kubernetes Batch API ([Jobs][k8s-jobs]) to schedule
+execution of TES tasks. This means that it should be possible to deploy TESK in
+any flavor of Kubernetes, but tests are currently only performed with
+[Kubernetes][k8s], [OpenShift][openshift], and [Minikube][minikube]. Follow
+these instructions if you wish to deploy a TES endpoint on your Native Cloud
+cluster, and please let us know if you deploy TESK in any new and interesting
+platform.
+
+TESK currently does not use any other storage (DB) than Kubernetes itself.
+[Persistent Volume Claims][k8s-pvc] are used as a temporary storage to handle
+input and output files of a task and pass them over between executors of a
+task. Note that PVCs are destroyed immediately after task completion! This
+means your cluster will need to provide a ReadWriteMany
+[StorageClass][k8s-storage-class]. Commonly used storage classes are
+[NFS][nfs] and [CephFS][cephfs].
+
+Here is an overview of TESK's architecture:
+
+
+
+A [Helm][helm] chart is provided for the convenient deployment of TESK. The
+chart is available in the [TESK code repository][tesk-helm].
+
+Follow these steps:
+
+- [Install Helm][helm-install]
+- Clone the [TESK repository][tesk]:
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/TESK.git
+ ```
+
+- Find the Helm chart at `charts/tesk`
+- Edit file [`values.yaml`][tesk-helm-values] (see
+ [notes](#edit-chart-values) below)
+- Log into the cluster and install TESK with:
+
+ ```sh
+ helm install -n TESK-NAMESPACE TESK-DEPLOYMENT-NAME . \
+ -f values.yaml
+ ```
+
+ * Replace `TESK-NAMESPACE` with the name of the namespace where you want to
+ install TESK. If the namespace is not specified, the default namespace will
+ be used.
+ * The argument provided for `TESK-DEPLOYMENT-NAME` will be used by Helm to
+ refer to the deployment, for example when upgrading or deleting the
+ deployment. You can choose whichever name you like.
+
+You should now have a working TESK instance! You can verify it by `curl`ing the
+API, replacing `HOST_NAME` with the `host_name` you configured:
+
+```sh
+$ curl http://HOST_NAME/ga4gh/tes/v1/tasks
+{
+ "tasks" : []
+}
+```
+
+### Edit Chart values
+
+In the [TESK deployment documentation][tesk-docs-deploy] there is
+a [description of every value][tesk-docs-deploy-values]. Briefly, the most
+important are:
+
+- `host_name`: Will be used to serve the API.
+
+- `storage`: `none` or `s3`. If `s3` is set, you must create two files: `config`
+ and `credentials`. You can find templates in the `s3-config/` folder:
+
+ `config`:
+
+ ```
+ [default]
+ # Non-standard entry, parsed by TESK, not boto3
+ endpoint_url=
+ ```
+
+ `credentials`:
+
+ ```
+ [default]
+ aws_access_key_id=
+ aws_secret_access_key=
+ ```
+
+ These files will be retrieved during the deployment of the Helm Chart
+
+- `storageClass`: Specify the storage class. If left empty, TESK will use the
+ default one configured in the Kubernetes cluster.
+
+- `auth.mode`: Enable (`auth`) or disable (`noauth`; default) authentication.
+ When enabled, you must add those two keys: `client_id` and `client_secret`
+ with your values:
+
+ ```yaml
+ auth:
+ client_id:
+ client_secret:
+ ```
+
+- `ftp`: Which FTP credentials mode to use. Two options are supported:
+ `.classic_ftp_secret` for basic authentication (username and password) or
+ `.netrc_secret` for using a [`.netrc`][netrc] file.
+
+ For the classic approach, you must write in `values.yaml` and add two values
+ `username` and `password`:
+
+ ```yaml
+ ftp:
+ classic_ftp_secret: ftp-secret
+ netrc_secret:
+ username:
+ password:
+ ```
+
+ For the `.netrc` approach, create a `.netrc` file in the `ftp` folder with
+ the connections details in the correct format and set a name in `ftp.netrc_secret`:
+
+ ```yaml
+ ftp:
+ classic_ftp_secret:
+ netrc_secret: netrc-secret
+ ```
+
+ You can find a template named `.netrc-TEMPLATE` in the `ftp` folder:
+
+ ```
+ machine ftp-private.ebi.ac.uk
+ login ftp-username
+ password ftp-password
+ ```
+
+### Deploy with microk8s
+
+This section outlines how to install TESK via [microk8s][microk8s] as tested on
+an Ubuntu 22.04 machine.
+
+First, install microk8s through the Snap Store and add yourself to the
+`microk8s` group:
+
+```bash
+sudo snap install microk8s --classic
+sudo usermod -a -G microk8s $USER
+```
+
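+Depending on your microk8s version, you may also need to prepare a local kube
+config directory and enable the bundled Helm addon before continuing (on older
+versions Helm is exposed as `microk8s helm3` rather than `microk8s helm`):
+
+```bash
+# Log out and back in (or run `newgrp microk8s`) so the group change takes effect
+mkdir -p ~/.kube
+sudo chown -R $USER ~/.kube
+microk8s enable helm3
+```
+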
+Next, let's clone the TESK repository and move into the Helm chart directory:
+
+```bash
+git clone https://github.com/elixir-cloud-aai/TESK.git
+cd TESK/charts/tesk
+```
+
+Follow the deployment instructions to modify
+`values.yaml` as per your requirements.
+
+!!! warning
+ You **MUST** set `host_name`. To make the service available through the
+ internet, see further below on how to configure the `service` section.
+
+Great - you are now ready to deploy TESK!
+
+First, let's create a namespace:
+
+```bash
+microk8s kubectl create namespace NAMESPACE
+```
+
+where `NAMESPACE` is an arbitrary name for your resource group.
+
+Now let's use Helm to install:
+
+```bash
+microk8s helm install -n NAMESPACE RELEASE_NAME . -f values.yaml
+```
+
+where `RELEASE_NAME` is an arbitrary name for this particular TESK release.
+
+Congratulations - TESK should now be successfully deployed!
+
+To find out the IP address at which TESK is available, run the following
+command:
+
+```bash
+microk8s kubectl get svc -n NAMESPACE
+```
+
+The output should look something like this:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+tesk-api ClusterIP 123.123.123.123 8080/TCP 8s
+```
+
+Use the `CLUSTER-IP` and the `PORT` with the following template to construct the
+URL at which the service is available (and make sure to replace the dummy URL
+when you want to try out the calls below):
+
+```console
+http://CLUSTER-IP:PORT/ga4gh/tes/v1
+```
+
+So, in this example case, we get the following URL:
+
+```console
+http://123.123.123.123:8080/ga4gh/tes/v1
+```
+
+You can now test the installation with the following example call to get a list
+of tasks:
+
+```bash
+curl http://123.123.123.123:8080/ga4gh/tes/v1/tasks
+```
+
+If everything worked well, you should get an output like this:
+
+```json
+{
+ "tasks": []
+}
+```
+
+Let's try to send a small task to TESK:
+
+```console
+curl \
+ -H "Accept: application/json" \
+ -H "Content-Type: application/json" \
+ -X POST \
+ --data '{"executors": [ { "command": [ "echo", "TESK says: Hello World" ], "image": "alpine" } ]}' \
+ "http://123.123.123.123:8080/ga4gh/tes/v1/tasks"
+```
+
+That should give you a task ID:
+
+```json
+{
+ "id" : "task-123ab456"
+}
+```
+
+You can run the task list command from before again. Now the response should not
+be an empty list anymore. Rather, you should see something like this:
+
+```json
+{
+ "tasks" : [ {
+ "id" : "task-123ab456",
+ "state" : "COMPLETE"
+ } ]
+}
+```
+
+To get more details on your task, use the task ID from before in a call like
+this:
+
+```bash
+curl http://123.123.123.123:8080/ga4gh/tes/v1/tasks/TASK_ID?view=FULL
+```
+
+We can use `jq` to parse the results. Let's say we want to see the logs of the
+first (only, in this case) TES executor, we could do something like this:
+
+```console
+$ curl -s http://123.123.123.123:8080/ga4gh/tes/v1/tasks/task-123ab456?view=FULL | jq '.logs[0].logs'
+```
+
+Which would give us an output like this:
+
+```json
+[
+ {
+ "start_time": "2023-11-01T14:54:20.000Z",
+ "end_time": "2023-11-01T14:54:25.000Z",
+ "stdout": "TESK says: Hello World\n",
+ "exit_code": 0
+ }
+]
+```
+
+Note that in the example, the API is only accessible internally. To make it
+accessible publicly, we need to properly configure the `service` section in
+`values.yaml`.
+
+In particular, we would like to set the type to `NodePort` and then set an open
+port on the host machine at which the API is exposed. For example, with
+
+```yaml
+service:
+ type: NodePort
+ node_port: 31567
+```
+
+Kubernetes will route requests coming in to port `31567` on the host machine to
+port `8080` on the pod.
+
+Let's confirm this by upgrading the Helm chart and again inspecting the services
+in our namespace with:
+
+```bash
+microk8s helm upgrade -n NAMESPACE RELEASE_NAME . -f values.yaml
+microk8s kubectl get svc -n NAMESPACE
+```
+
+We should get an output like this:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+tesk-api NodePort 123.123.123.111 8080:31567/TCP 5s
+```
+
+Indeed, the port section changed as expected. Now, note that the `CLUSTER-IP`
+_also_ changed. However, this is not a problem as Kubernetes will manage the
+routing, so we don't really need to know the `CLUSTER-IP`. Instead, now we can
+use the hostname (or IP) of the host machine, together with the port we set to
+call our TES API from anywhere:
+
+```bash
+curl http://HOST_NAME_OR_IP:31567/ga4gh/tes/v1/tasks
+```
+
+Of course you need to make sure that the port you selected is opened for
+public access. This will depend on your router/firewall settings.
+
+If you would like to tear down the TESK service, simply run:
+
+```bash
+microk8s helm uninstall RELEASE_NAME -n NAMESPACE
+```
\ No newline at end of file
diff --git a/docs/guides/guide-admin/trsfiler.md b/docs/guides/guide-admin/trsfiler.md
new file mode 100644
index 0000000..7a82801
--- /dev/null
+++ b/docs/guides/guide-admin/trsfiler.md
@@ -0,0 +1,68 @@
+## Synopsis
+
+Microservice implementing the [Global Alliance for Genomics and
+Health (GA4GH)][ga4gh] [Tool Registry Service][ga4gh-trs] (TRS)
+API specification.
+
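+As a quick orientation, a running instance exposes the standard GA4GH TRS v2
+endpoints. A minimal sketch; the host is a placeholder for your own deployment:
+
+```bash
+# Service metadata
+curl https://trs-filer.example.org/ga4gh/trs/v2/service-info
+
+# List registered tools
+curl https://trs-filer.example.org/ga4gh/trs/v2/tools
+```
+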
+## Installation
+
+You can find a Helm chart in the [GitHub repository](https://github.com/elixir-cloud-aai/trs-filer/tree/dev/deployment) of TRS-Filer.
+
+Follow these instructions:
+
+- Install [Helm][helm-install]
+- Clone the [TRS-Filer repository](https://github.com/elixir-cloud-aai/trs-filer/)
+
+ ```sh
+ git clone https://github.com/elixir-cloud-aai/trs-filer.git
+ ```
+
+- Browse to `deployment` to find the `Chart.yaml` and the `values.yaml` files
+
+## Usage
+
+First you must create a namespace in Kubernetes in which to deploy TRS-Filer. The
+commands below assume that everything is created in the context of this
+namespace. How the namespace is created depends on the cluster, so we won't
+document it here.
+
+You need to edit the `values.yaml` file.
+
+After this, you can deploy TRS-Filer using `helm`:
+
+```bash
+helm install trs-filer . -f values.yaml
+```
+
+### Updates
+
+If you want to edit any of the Deployments, you can update them with
+`helm` and the `values.yaml` file. Once edited, you can run this command:
+
+```bash
+helm upgrade trs-filer . -f values.yaml
+```
+
+## Technical details
+
+### MongoDB
+
+The MongoDB database is deployed using:
+
+- `templates/mongo-deploy.yaml`
+
+### TRS-Filer
+
+TRS-Filer is deployed using:
+
+- `templates/trs-filer-deploy.yaml`
+
+## Destroy
+
+Simply run:
+
+```bash
+helm uninstall trs-filer
+```
+
+
diff --git a/docs/guides/guide-user/index.md b/docs/guides/guide-user/index.md
index b70b7b5..0a49e31 100644
--- a/docs/guides/guide-user/index.md
+++ b/docs/guides/guide-user/index.md
@@ -89,11 +89,17 @@ A demo workflow is available [here][elixir-cloud-demo-cwl].
### Nextflow
-!!! warning "Under construction"
- More info coming soon...
+You can find an article about Nextflow with GA4GH TES [here](https://techcommunity.microsoft.com/blog/healthcareandlifesciencesblog/introducing-nextflow-with-ga4gh-tes-a-new-era-of-scalable-data-processing-on-azu/4253160).
-## Workflow Execution Service (WES)
+To use TES in your Nextflow config, use the plugin `nf-ga4gh`:
+
+```groovy
+plugins {
+ id 'nf-ga4gh'
+}
+```
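+
+In addition to enabling the plugin, the executor and the endpoint of your TES
+service need to be configured. A minimal sketch of the additional settings (the
+URL is a placeholder; check the exact option names against the nf-ga4gh plugin
+documentation):
+
+```groovy
+process.executor = 'tes'
+tes.endpoint = 'https://your-tes-endpoint.org'
+```
+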
+## Workflow Execution Service (WES)
The GA4GH [WES][ga4gh-wes] is a standard specification protocol for executing
and monitoring bioinformatics workflows. It allows researchers to easily
execute and manage complex analysis pipelines across multiple computing
@@ -119,10 +125,42 @@ specification are:
## Data Repository Service (DRS)
-!!! warning "Under construction"
- More info coming soon...
+The GA4GH [DRS][ga4gh-drs] API provides a standard set of data retrieval methods
+to access genomic and related health data across different repositories.
+It allows researchers to simplify and standardize data retrieval in cloud-based
+environments. Key features include standardized data access, which offers a consistent
+API for retrieving datasets, and cloud-agnostic operation, meaning it works across different
+cloud infrastructures. Two use cases for the GA4GH DRS:
+
+- Scenario 1: A researcher wants to run an analysis pipeline on a dataset without
+ worrying about where the data physically resides. The researcher uses a DRS ID
+ to request the dataset. DRS resolves the ID to the actual storage location and
+ provides signed URLs or access tokens, and the pipeline retrieves the data
+ seamlessly, regardless of the underlying cloud or storage system.
+
+- Scenario 2: A pharmaceutical company is collaborating with hospitals to analyze
+ patient genomic data. Due to privacy regulations, raw data cannot be moved outside
+ the hospital’s secure environment. The hospital can expose their datasets via DRS
+ endpoints and the pharmaceutical company's workflow engine queries DRS to get metadata.
+ Finally, the analysis is performed without violating data residency rules.
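+
+In practice, resolving a DRS ID boils down to two standardized calls; the host
+and identifiers below are placeholders:
+
+```bash
+# Retrieve metadata for a data object by its DRS identifier
+curl https://drs.example.org/ga4gh/drs/v1/objects/OBJECT_ID
+
+# Request an access URL for one of the object's access methods
+curl https://drs.example.org/ga4gh/drs/v1/objects/OBJECT_ID/access/ACCESS_ID
+```
+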
## Tool Registry Service (TRS)
-!!! warning "Under construction"
- More info coming soon...
+The GA4GH [TRS][ga4gh-trs] API provides a standard mechanism to list, search and
+register tools and workflows across different platforms and cloud environments.
+It supports workflows written in CWL, WDL, Nextflow, Galaxy, and Snakemake.
+Here are examples of two use cases:
+
+- Scenario 1: A bioinformatics researcher develops a workflow for variant calling
+ using WDL and Docker containers. They want to share it with collaborators who use
+ a different platform. TRS can help: the researcher registers the workflow in a
+ TRS-compliant registry like Dockstore, and the collaborators can discover the
+ workflow via the TRS API and run it on their platform.
+ TRS ensures that metadata, versioning, and container images are standardized and
+ accessible.
+
+- Scenario 2: A hospital’s genomics lab uses an automated pipeline to analyze patient
+ exome data for rare disease diagnosis. The pipeline queries a TRS registry to find
+ the latest version of tools (like VEP or GATK), retrieves the workflow descriptor
+ and container images. Finally, the pipeline executes the tools in a secure,
+ compliant environment.
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index 7a3f8cf..dc9c036 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -89,7 +89,15 @@ nav:
- "guides/guide-dev/index.md"
- Administrators:
- "guides/guide-admin/index.md"
+ - "Cloud-Registry": "guides/guide-admin/cloudregistry.md"
+ - "CWL-WES": "guides/guide-admin/cwlwes.md"
+ - "DRS-Filer": "guides/guide-admin/drsfiler.md"
+ - "Funnel": "guides/guide-admin/funnel.md"
- "LS Login configuration": "guides/guide-admin/services_to_ls_aai.md"
+ - "proWES": "guides/guide-admin/prowes.md"
+ - "proTES": "guides/guide-admin/protes.md"
+ - "TESK" : "guides/guide-admin/tesk.md"
+ - "TRS-Filer": "guides/guide-admin/trsfiler.md"
- Contributors:
- "guides/guide-contributor/index.md"
- "Workflow": "guides/guide-contributor/workflow.md"