diff --git a/Makefile b/Makefile
index dff7a6fa..42c960b6 100644
--- a/Makefile
+++ b/Makefile
@@ -10,7 +10,7 @@ PYTHON := python3
.PHONY: virtenv_create
## Create virtualenv
virtenv_create:
- @python3 -c "import virtualenv" >/dev/null 2>&1 || pip install virtualenv
+ @python3 -c "import virtualenv" >/dev/null 2>&1 || pip install --break-system-packages --user virtualenv
python3 -m virtualenv $(VENV_NAME)
source $(VENV_NAME)/bin/activate && pip install -r requirements.txt
diff --git a/docs/documentation/data_space_connectors/fiware/img/consumer_arch.svg b/docs/documentation/data_space_connectors/fiware/img/consumer_arch.svg
new file mode 100644
index 00000000..bd84d9f2
--- /dev/null
+++ b/docs/documentation/data_space_connectors/fiware/img/consumer_arch.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/docs/documentation/data_space_connectors/fiware/img/provider_arch.svg b/docs/documentation/data_space_connectors/fiware/img/provider_arch.svg
new file mode 100644
index 00000000..89809bbd
--- /dev/null
+++ b/docs/documentation/data_space_connectors/fiware/img/provider_arch.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/docs/documentation/data_space_connectors/fiware/index.md b/docs/documentation/data_space_connectors/fiware/index.md
index dc44e317..40a2c48d 100644
--- a/docs/documentation/data_space_connectors/fiware/index.md
+++ b/docs/documentation/data_space_connectors/fiware/index.md
@@ -1,49 +1,92 @@
-# FIWARE Connector
-
-## Introduction
-The [FIWARE Data Space Connector](https://github.com/FIWARE/data-space-connector) is an integrated suite of components every organization participating in a data space should deploy to “connect” to a data space. Following the DSBA recommendations, it allows to:
-
-* Interface with Trust Services aligned with [EBSI specifications](https://api-pilot.ebsi.eu/docs/apis)
-* Implement authentication based on [W3C DID](https://www.w3.org/TR/did-core/) with
- [VC/VP standards](https://www.w3.org/TR/vc-data-model/) and
- [SIOPv2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html#name-cross-device-self-issued-op) /
- [OIDC4VP](https://openid.net/specs/openid-4-verifiable-presentations-1_0.html#request_scope) protocols
-* Implement authorization based on attribute-based access control (ABAC) following an
- [XACML P*P architecture](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml) using
- [Open Digital Rights Language (ODRL)](https://www.w3.org/TR/odrl-model/) and the
- [Open Policy Agent (OPA)](https://www.openpolicyagent.org/)
-* Provide compatibility with [ETSI NGSI-LD](https://www.etsi.org/committee/cim) as data exchange API
-* Supports the [TMForum APIs](https://www.tmforum.org/oda/open-apis/) for contract negotiation
-
-**Note:** Although the FIWARE Data Space Connector provides compatibility with NGSI-LD as the data exchange
-API, it could also be used for any other RESTful API by replacing or extending the PDP component of the
-connector.
-
-The functionalities mentioned above can be used by an organization to connect to the data space in its role
-as data (processing) service provider, consumer of data (processing) services, or both.
-
-### Key points
-
-- Final and ready-to-use software (versus the framework approach of Eclipse)
-- (Partial support for) IDS Dataspace Protocol (DSP)
-- Not as agnostic as Eclipse, although its modular approach makes it possible (in theory) to extend its capabilities.
-- It is not very tested; expect bugs and error reporting work.
-- Development is relatively slow.
+---
+title: FIWARE Data Space Connector
+---
-## Getting started
-The [Minimum Viable Data Space Infrastructure](https://github.com/CitComAI-Hub/Minimum_Viable_DataSpace_Infrastructure) repository provides a ready-to-use, minimal deployment of a FIWARE Data Space Connector. Its main purpose is to offer an easy way to deploy the FIWARE connector locally.
+## Overview
+
+The [FIWARE Data Space Connector (FDSC)](https://github.com/FIWARE/data-space-connector) is an integrated suite of components that every organization participating in a data space should deploy to _connect_ to it. Following the DSBA recommendations, it allows organizations to:
+
+- Interface with Trust Services aligned with [EBSI specifications](https://api-pilot.ebsi.eu/docs/apis).
+- Implement authentication based on [W3C DID](https://www.w3.org/TR/did-core/) with [VC/VP standards](https://www.w3.org/TR/vc-data-model/) and [SIOPv2](https://openid.net/specs/openid-connect-self-issued-v2-1_0.html#name-cross-device-self-issued-op)/[OIDC4VP](https://openid.net/specs/openid-4-verifiable-presentations-1_0.html#request_scope) protocols.
+- Implement authorization based on attribute-based access control (ABAC) following an [XACML P*P architecture](https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=xacml) using [Open Digital Rights Language (ODRL)](https://www.w3.org/TR/odrl-model/) and the [Open Policy Agent (OPA)](https://www.openpolicyagent.org/).
+- Provide compatibility with [ETSI NGSI-LD](https://www.etsi.org/committee/cim) as the data exchange API.
+- Support the [TMForum APIs](https://www.tmforum.org/oda/open-apis/) for contract negotiation.
+
+!!! note
+ Although the FIWARE Data Space Connector provides compatibility with NGSI-LD as the data exchange API, it could also be used for any other RESTful API by replacing or extending the PDP component of the connector.
+
+??? info "Key points"
-### Minimum Viable Data Space (Local)
-Check out the [examples/kind_minimal_ds_local exmple](https://github.com/CitComAI-Hub/Minimum_Viable_DataSpace_Infrastructure/tree/main/examples/kind_minimal_ds_local). This example is composed of the following blocks:
+ - Final and ready-to-use software (versus the framework approach of Eclipse).
+ - (Partial support for) IDS Dataspace Protocol (DSP).
+ - Not as agnostic as Eclipse, although its modular approach makes it possible (in theory) to extend its capabilities.
+ - It is not thoroughly tested; expect bugs and some error-reporting work.
+ - Development is relatively slow.
-- Fiware Data Space (FDS) Operator or Trust Anchor: Trust Anchor that manages the issuers and credentials.
-- FDS Connector A (Provider): Entity that provides and consumes data from the data space.
-- FDS Connector B (Consumer): Entity that only consumes data from the data space.
+## Getting started
+
+A good way to start working with the connector is to deploy a [Minimum Viable Data Space (MVDS)](../../../getting_started/data_spaces/index.md#minimum-viable-data-space) using FIWARE's minimum infrastructure. This infrastructure provides a minimal implementation of a data space built on FIWARE technology, allowing you to test the FIWARE Data Space Connector and its components in a local environment.
{ loading=lazy }
-For further details and to get started, please visit the [repository on GitHub](https://github.com/CitComAI-Hub/Minimum_Viable_DataSpace_Infrastructure).
+This MVDS is composed of the following blocks:
+
+| Component | Description |
+|-----------|-------------|
+| **FIWARE Data Space Operator or Trust Anchor** | The entity responsible for managing the issuers and credentials within the data space. It ensures the trustworthiness of the data space by managing the identities and credentials of participants. |
+| **FDS Connector A (Provider)** | An entity that provides data to the data space. It acts as a data provider, enabling data exchange within the data space. |
+| **FDS Connector B (Consumer)** | An entity that consumes data from the data space. It acts as a data consumer, retrieving data without providing any in return. |
+
+!!! example
+
+ - **FIWARE MVDS local example:** [code repository](https://github.com/FIWARE/data-space-connector/blob/main/doc/deployment-integration/local-deployment/LOCAL.MD).
+ - **CitcomAI MVDS local example:** [code repository](https://github.com/CitComAI-Hub/Minimum_Viable_DataSpace_Infrastructure).
+
+## Technical Details & Deployments
+
+The [FIWARE Data Space Connector repository](https://github.com/FIWARE/data-space-connector) provides a Helm chart for deploying the connector in a Kubernetes cluster. The chart includes all the necessary components to set up a data space connector in both consumer and provider modes. The chart is designed to be flexible and can be customized to fit the specific needs of the data space.
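Before writing a custom `values.yaml`, you can dump the chart's default values to see what is customizable. A minimal sketch; the repository URL, chart name, and version match those used in the deployment guides below, while the `FETCH_CHART` guard is our addition so the snippet is safe to run offline:

```shell
# Inspect the chart's default values before customizing them.
# Guarded: only fetches when explicitly enabled and helm is installed.
if command -v helm >/dev/null 2>&1 && [ "${FETCH_CHART:-no}" = "yes" ]; then
  helm repo add data-space-connector https://fiware.github.io/data-space-connector/
  helm repo update
  helm show values data-space-connector/data-space-connector --version 7.17.0
else
  RESULT="skipped"
  echo "dry run: set FETCH_CHART=yes with helm installed to fetch the chart"
fi
```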
+
+### Consumer
+
+The consumer mode of the FIWARE Data Space Connector is composed of the following components:
+
+![Consumer architecture](./img/consumer_arch.svg){ loading=lazy }
+
+!!! example "Deployments"
+ - Minimum AWS deployment example: [Code](../../mv_data_space/fiware/consumer.md)
+
+| Component | Functionality | Description |
+|:---------:|---------------|-------------|
+| **DID (did-helper)** | Config Services | A component that provides support for W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs). It helps in managing DIDs and VCs within the data space. |
+| **Keycloak** | Authentication | An identity and access management solution that provides authentication and authorization services. It is used to manage user identities and access to resources within the data space. |
+| **Rainbow** | IDSA Data Space Protocol | Rainbow, also known as Dataspace Rainbow, is an implementation of the Dataspace Protocol 2024-1 promoted by the IDSA (International Data Spaces Association). |
+| **PostgreSQL** | Database | A relational database management system that stores data related to the data space. |
+
+### Provider
+
+The provider mode of the FIWARE Data Space Connector is composed of the following components:
+
+![Provider architecture](./img/provider_arch.svg){ loading=lazy }
+!!! example "Deployments"
+ - Minimum AWS deployment example: [Code](../../mv_data_space/fiware/provider.md)
+
+| Component | Functionality | Description |
+|:---------:|---------------|-------------|
+| **APISIX** | Authorization | A component that provides API gateway functionality with an OPA plugin for traffic management. |
+| **OPA** | Authorization | An open-source policy engine that provides attribute-based access control (ABAC) for the data space. It evaluates policies and makes authorization decisions based on attributes and rules defined in the data space. |
+| **ODRL-PAP** | Authorization | A component that implements the ODRL (Open Digital Rights Language) Policy Administration Point (PAP) for managing data access policies within the data space. |
+| **Scorpio** | Data Broker | An NGSI-LD data broker that facilitates the exchange of data between participants in the data space and manages data discovery and retrieval. |
+| **VCVerifier** | Authentication | A component that verifies the authenticity of Verifiable Credentials (VCs) and exchanges them for tokens. It ensures that the credentials presented by participants are valid and trustworthy. |
+| **Credential Config Service** | Authentication | A service that manages the configuration of credentials. It holds the information about which VCs are required for accessing a service. |
+| **Trusted Issuers List** | Authentication | A list of trusted issuers for the provider. Acts as Trusted Issuers List by providing an [EBSI Trusted Issuers Registry](https://hub.ebsi.eu/) API. |
+| **TM Forum API** | Data Discovery | A component that implements the [TM Forum APIs](https://www.tmforum.org/oda/open-apis/) for contract negotiation within the data space. It allows participants to negotiate and manage contracts related to data exchange. |
+| **Contract Management** | Data Discovery | A notification listener for contract-management events emitted by the TMForum APIs. |
+| **Rainbow** | IDSA Data Space Protocol | Rainbow, also known as Dataspace Rainbow, is an implementation of the Dataspace Protocol 2024-1 promoted by the IDSA (International Data Spaces Association). |
+| **TPP** | IDSA Data Space Protocol | Integration of checks for the transfer process protocol. |
+| **PostgreSQL** | Database | A relational database management system that stores data related to the data space. |
+| **PostGIS** | Database | A PostgreSQL database with PostGIS spatial extensions. |
+| **MySQL** | Database | An open-source relational database management system that uses SQL for data management. |
+| **DID (did-helper)** | Config Services | A component that provides support for W3C Decentralized Identifiers (DIDs) and Verifiable Credentials (VCs). It helps in managing DIDs and VCs within the data space. |
\ No newline at end of file
diff --git a/docs/documentation/data_space_connectors/index.md b/docs/documentation/data_space_connectors/index.md
index 72d381ff..be668c5e 100644
--- a/docs/documentation/data_space_connectors/index.md
+++ b/docs/documentation/data_space_connectors/index.md
@@ -1,35 +1,44 @@
---
-title: Overview
+title: Connectors
---
-Currently, only two data space connector technologies can be considered prominent: **Eclipse** and **Fiware**. On the other hand, **Simpl**, which is based on Eclipse technologies, is still in its early stages of development. As it progresses, Simpl aims to leverage the foundational strengths of Eclipse to provide a final product solution within the data space connector landscape in the EU.
+A data space connector is a technical component that enables the secure and controlled exchange of data between organisations within a data space. It functions as a gateway that manages communication, authentication, and data usage policies. **Each participant deploys their own connector**, ensuring sovereignty over their data. Furthermore, it promotes interoperability by adhering to standards such as those set by IDSA or Gaia-X.
-Below, we provide an analysis of each alternative and an introductory guide to testing the software.
+The most widespread data space connector technologies are:
-- :material-power-plug-outline:{ .lg .middle } __Eclipse__
+- :material-power-plug-outline:{ .lg .middle } __Fiware__
---
- Eclipse data space connector.
+ The FIWARE Data Space Connector is an integrated suite of components implementing [_DSBA Technical Convergence recommendations_](https://data-spaces-business-alliance.eu/wp-content/uploads/dlm_uploads/Data-Spaces-Business-Alliance-Technical-Convergence-V2.pdf).
- [:octicons-arrow-right-24: Learn more](./eclipse/index.md)
+ [:octicons-arrow-right-24: Official Documentation](https://github.com/FIWARE/data-space-connector)
-- :material-power-plug-outline:{ .lg .middle } __Fiware__
+ [:octicons-arrow-right-24: Learn more](./fiware/index.md)
+
+- :material-power-plug-outline:{ .lg .middle } __Eclipse__
---
- Fiware data space connector.
+ The Eclipse Data Space Connector is a framework that provides a set of components and APIs for building data space connectors. It is designed to be modular and extensible, allowing developers to create custom connectors that meet their specific needs.
- [:octicons-arrow-right-24: Learn more](./fiware/index.md)
+ [:octicons-arrow-right-24: Official Documentation](https://github.com/eclipse-edc/Connector)
+
+ [:octicons-arrow-right-24: Learn more](./eclipse/index.md)
- :material-power-plug-outline:{ .lg .middle } __Simpl__
---
- Simpl data space connector. *(Not tested)*
+ The Simpl programme is an initiative that aims to create a data space connector **based on the Eclipse Data Space Components**. It is still in the early stages of development, but it seeks to leverage the foundational strengths of Eclipse to provide a final-product solution within the EU data space connector landscape.
+
+ [:octicons-arrow-right-24: Official Site](https://simpl-programme.ec.europa.eu/)
-
+ ??? note "Official References"
+ - [Installation Guide](https://code.europa.eu/simpl/simpl-open/documentation/installation-guide)
+ - [Functional and Technical Architecture Specifications](https://code.europa.eu/simpl/simpl-open/architecture/-/blob/master/functional_and_technical_architecture_specifications/Functional-and-Technical-Architecture-Specifications.md?ref_type=heads)
+ - [User Manual](https://code.europa.eu/simpl/simpl-open/documentation/user-manual)
-
+
\ No newline at end of file
diff --git a/docs/documentation/index.md b/docs/documentation/index.md
index 2891008f..47a18e51 100644
--- a/docs/documentation/index.md
+++ b/docs/documentation/index.md
@@ -5,24 +5,25 @@ title: Guides
Reports and deployment guides of different components.
-## How can I connect different data platforms or data spaces?
+## Data Space components
-- :material-graph-outline:{ .lg .middle } __Data Federation__
+- :material-security:{ .lg .middle } __Trust Frameworks__
---
- The *data federation section* groups the guides to be able to communicate different brokers based on their technology.
+ A _trust framework_ is a set of policies, principles, and mechanisms that establish and maintain trust among participants in a data space ecosystem.
- [:octicons-arrow-right-24: Learn more](./data_federation/index.md)
+ [:octicons-arrow-right-24: Learn more](./trust_frameworks/index.md)
-
+- :material-account-credit-card-outline:{ .lg .middle } __Verifiable Credentials__
+ ---
-## What data space connector technologies exist and how can I deploy a connector?
+ A _Verifiable Credential_ is a digital document that proves something about its holder. It's like a digital passport, diploma, or any other credential, but with the added benefit of being tamper-evident and verifiable using cryptography.
-
+ [:octicons-arrow-right-24: Learn more](./verifiable_credentials/index.md)
- :material-power-plug-outline:{ .lg .middle } __Data Space Connector__
@@ -32,4 +33,18 @@ Reports and deployment guides of different components.
[:octicons-arrow-right-24: Learn more](./data_space_connectors/index.md)
-
\ No newline at end of file
+
+
+## How can I connect different data platforms or data spaces?
+
+
+
+- :material-graph-outline:{ .lg .middle } __Data Federation__
+
+ ---
+
+ The *data federation section* groups guides on interconnecting different brokers according to their technology.
+
+ [:octicons-arrow-right-24: Learn more](./data_federation/index.md)
+
+
diff --git a/docs/documentation/mv_data_space/fiware/consumer.md b/docs/documentation/mv_data_space/fiware/consumer.md
new file mode 100644
index 00000000..e1ea0f28
--- /dev/null
+++ b/docs/documentation/mv_data_space/fiware/consumer.md
@@ -0,0 +1,232 @@
+---
+title: FDSC Consumer
+---
+
+!!! warning
+ Check the [prerequisites section](../index.md#common-setup-steps) before proceeding with the deployment.
+
+## Step by Step AWS deployment
+
+The Consumer role allows you to request and consume data/services from providers in the data space.
+
+### Step 1: Create Security Group
+
+Create a dedicated security group for the Consumer:
+
+```bash
+# Set your configuration
+export YOUR_PUBLIC_IP="YOUR_IP_HERE" # Replace with your public IP
+export AWS_REGION="eu-west-1" # Replace with your preferred region
+
+# Create security group
+aws ec2 create-security-group \
+ --group-name consumer-sg \
+ --description "Security group for Consumer" \
+ --region $AWS_REGION
+
+# Add SSH access from your IP
+aws ec2 authorize-security-group-ingress \
+ --group-name consumer-sg \
+ --protocol tcp \
+ --port 22 \
+ --cidr ${YOUR_PUBLIC_IP}/32 \
+ --region $AWS_REGION
+
+# Add Kubernetes API access from your IP
+aws ec2 authorize-security-group-ingress \
+ --group-name consumer-sg \
+ --protocol tcp \
+ --port 6443 \
+ --cidr ${YOUR_PUBLIC_IP}/32 \
+ --region $AWS_REGION
+
+# Add HTTP/HTTPS access (public)
+aws ec2 authorize-security-group-ingress \
+ --group-name consumer-sg \
+ --protocol tcp \
+ --port 80 \
+ --cidr 0.0.0.0/0 \
+ --region $AWS_REGION
+```
+
+!!! warning "Important"
+ Note the security group ID returned by the create command.
+
+### Step 2: Launch Consumer Instance
+
+For the Consumer instance we use the Ubuntu 22.04 LTS image (`ami-0694d931cee176e7d`) and the `t3.large` instance type. Feel free to change these parameters, especially if the expected load exceeds the capacity of the virtual machine.
+
+```bash
+# Replace with your security group ID
+export CONSUMER_SG_ID="sg-xxxxxxxxx"
+
+# Launch Consumer instance
+aws ec2 run-instances \
+ --image-id ami-0694d931cee176e7d \
+ --instance-type t3.large \
+ --key-name dataspace-key \
+ --security-group-ids $CONSUMER_SG_ID \
+ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=consumer}]' \
+ --region $AWS_REGION
+```
+
+!!! warning "Important"
+ Note the instance ID returned by this command.
+
+### Step 3: Assign Elastic IP
+
+```bash
+# Replace with your Consumer instance ID
+export CONSUMER_INSTANCE_ID="i-xxxxxxxxx"
+
+# Allocate Elastic IP
+aws ec2 allocate-address \
+ --domain vpc \
+ --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=consumer-ip}]' \
+ --region $AWS_REGION
+
+# Associate IP to instance (replace ALLOCATION_ID with the one returned above)
+aws ec2 associate-address \
+ --instance-id $CONSUMER_INSTANCE_ID \
+ --allocation-id ALLOCATION_ID_FROM_ABOVE \
+ --region $AWS_REGION
+```
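Instead of copying the allocation ID by hand, the two calls can be chained with `--query`. A sketch under the assumption that the AWS CLI is configured; the guard makes it a no-op until `CONSUMER_INSTANCE_ID` is set:

```shell
# Capture the allocation ID with --query instead of copying it manually.
AWS_REGION="${AWS_REGION:-eu-west-1}"
CONSUMER_INSTANCE_ID="${CONSUMER_INSTANCE_ID:-}"
if [ -n "$CONSUMER_INSTANCE_ID" ]; then
  ALLOCATION_ID=$(aws ec2 allocate-address --domain vpc \
    --query 'AllocationId' --output text --region "$AWS_REGION")
  aws ec2 associate-address --instance-id "$CONSUMER_INSTANCE_ID" \
    --allocation-id "$ALLOCATION_ID" --region "$AWS_REGION"
else
  RESULT="skipped"
  echo "set CONSUMER_INSTANCE_ID to allocate and associate in one go"
fi
```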
+
+### Step 4: Verify Instance Status
+
+```bash
+aws ec2 describe-instances \
+ --instance-ids $CONSUMER_INSTANCE_ID \
+ --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value | [0], PublicIpAddress, State.Name]' \
+ --output table \
+ --region $AWS_REGION
+```
+
+### Step 5: Install k3s
+
+```bash
+# Replace with your Consumer public IP
+export CONSUMER_IP="YOUR_CONSUMER_IP"
+
+# Connect to the instance
+ssh -i "dataspace-key.pem" ubuntu@$CONSUMER_IP
+
+# Install k3s
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san $CONSUMER_IP" sh -
+
+# Get the kubeconfig
+sudo cat /etc/rancher/k3s/k3s.yaml
+```
+
+### Step 6: Configure Local Access
+
+On your local machine, create a kubeconfig file for the Consumer:
+
+```bash
+# Create k3s-consumer.yaml with the content from the previous step
+# Replace 127.0.0.1 with your Consumer IP in the server field
+# The file should contain:
+# server: https://YOUR_CONSUMER_IP:6443
+
+# Test the connection
+export KUBECONFIG=k3s-consumer.yaml
+kubectl get nodes
+```
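The edit above can also be scripted with `sed`. A self-contained sketch; the IP is a documentation placeholder from the TEST-NET range, and the sample file stands in for the kubeconfig you copied from the instance:

```shell
# Rewrite the kubeconfig server address from localhost to the public IP.
CONSUMER_IP="203.0.113.10"   # placeholder; use your Consumer Elastic IP
printf 'server: https://127.0.0.1:6443\n' > k3s-consumer.yaml
sed -i "s/127\.0\.0\.1/${CONSUMER_IP}/" k3s-consumer.yaml
cat k3s-consumer.yaml   # server: https://203.0.113.10:6443
```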
+
+### Step 7: Configure Storage
+
+```bash
+# Enable storage provisioner
+kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml
+
+# Wait a few seconds for it to start. You can check its status with
+kubectl get pods -n local-path-storage
+```
+
+### Step 8: Create namespace
+```bash
+kubectl create namespace consumer
+```
+
+### Step 9: Create Consumer Identity
+
+```bash
+# Create directory for identity files
+mkdir consumer-identity
+
+# Generate the private key. Don't get confused about the curve name: openssl uses `prime256v1` for `secp256r1` (also known as P-256)
+openssl ecparam -name prime256v1 -genkey -noout -out consumer-identity/private-key.pem
+
+# Generate corresponding public key
+openssl ec -in consumer-identity/private-key.pem -pubout -out consumer-identity/public-key.pem
+
+# Create a (self-signed) certificate
+openssl req -new -x509 -key consumer-identity/private-key.pem -out consumer-identity/cert.pem -days 360
+
+# Export the keystore
+openssl pkcs12 -export -inkey consumer-identity/private-key.pem -in consumer-identity/cert.pem -out consumer-identity/cert.pfx -name didPrivateKey
+
+# Check the contents
+keytool -v -keystore consumer-identity/cert.pfx -list -alias didPrivateKey
+
+# Generate did from the keystore
+wget https://github.com/wistefan/did-helper/releases/download/0.1.1/did-helper
+chmod +x did-helper
+./did-helper -keystorePath ./consumer-identity/cert.pfx -keystorePassword=test
+```
+!!! warning "Important"
+ Note the DID returned by the `did-helper`. It is the consumer DID.
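If you prefer to capture the DID in a variable rather than copying it manually, a `grep` over the tool's output works. The sample line below is illustrative (not the exact `did-helper` output format); the key value is the example DID that appears later in these guides:

```shell
# Extract the did:key identifier from captured output (sample shown).
sample='Did created: did:key:zDnaeQfjsx66YNYV86SDBB1e5kunWKJcWwk686dvjirEE7pqW'
CONSUMER_DID=$(printf '%s\n' "$sample" | grep -o 'did:key:[A-Za-z0-9]*')
echo "$CONSUMER_DID"
```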
+
+### Step 10: Deploy Identity Secret
+
+```bash
+# Create secret with the identity
+kubectl create secret generic consumer-identity --from-file=consumer-identity/cert.pfx -n consumer
+```
+
+### Step 11: Configure Values
+
+!!! danger
+ Before deploying, you must modify the Consumer's `values.yaml` file to use your actual IP address instead of `127.0.0.1.nip.io`. Modify `consumer/values.yaml`:
+
+```yaml
+# 1. Replace the localhost address for the Keycloak ingress hostname:
+keycloak:
+ ingress:
+ enabled: true
+ hostname: keycloak-consumer.YOUR_CONSUMER_IP.nip.io
+
+
+# 2. Replace the localhost also for KC_HOSTNAME in extraVars:
+- name: KC_HOSTNAME
+ value: keycloak-consumer.YOUR_CONSUMER_IP.nip.io
+
+# 3. In realm, replace:
+realm:
+  frontendUrl: http://keycloak-consumer.127.0.0.1.nip.io:8080
+
+# with
+realm:
+  frontendUrl: http://keycloak-consumer.YOUR_CONSUMER_IP.nip.io
+
+# 4. Replace DID with your own, previously generated consumer DID.
+- name: DID
+ value: "did:key:xxxxxxxxxx"
+```
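The hostname substitutions can be applied in one pass with `sed`. A self-contained sketch on a sample fragment; the file name, IP, and sample lines are placeholders for your real `consumer/values.yaml`:

```shell
CONSUMER_IP="203.0.113.10"   # placeholder; use your Consumer Elastic IP
cat > values-sample.yaml <<'EOF'
hostname: keycloak-consumer.127.0.0.1.nip.io
value: keycloak-consumer.127.0.0.1.nip.io
EOF
# Swap every localhost nip.io hostname for the public one.
sed -i "s/127\.0\.0\.1\.nip\.io/${CONSUMER_IP}.nip.io/g" values-sample.yaml
grep -c "${CONSUMER_IP}.nip.io" values-sample.yaml
```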
+
+### Step 12: Add Helm Repository
+
+```bash
+helm repo add data-space-connector https://fiware.github.io/data-space-connector/
+helm repo update
+```
+
+### Step 13: Deploy Consumer
+
+```bash
+# Deploy using your modified values file
+helm install consumer-dsc data-space-connector/data-space-connector --version 7.17.0 -f consumer/values.yaml --namespace=consumer
+
+# Monitor deployment
+watch kubectl get pods -n consumer
+```
diff --git a/docs/documentation/mv_data_space/fiware/provider.md b/docs/documentation/mv_data_space/fiware/provider.md
new file mode 100644
index 00000000..dbcfef4f
--- /dev/null
+++ b/docs/documentation/mv_data_space/fiware/provider.md
@@ -0,0 +1,274 @@
+---
+title: FDSC Provider
+---
+
+!!! warning
+ Check the [prerequisites section](../index.md#common-setup-steps) before proceeding with the deployment.
+
+## Step by Step AWS deployment
+
+The Provider role allows you to offer data/services to consumers in the data space.
+
+### Step 1: Create Security Group
+
+Create a dedicated security group for the Provider:
+
+```bash
+# Set your configuration
+export YOUR_PUBLIC_IP="YOUR_IP_HERE" # Replace with your public IP
+export AWS_REGION="eu-west-1" # Replace with your preferred region
+
+# Create security group
+aws ec2 create-security-group \
+ --group-name provider-sg \
+ --description "Security group for Provider" \
+ --region $AWS_REGION
+
+# Add SSH access from your IP
+aws ec2 authorize-security-group-ingress \
+ --group-name provider-sg \
+ --protocol tcp \
+ --port 22 \
+ --cidr ${YOUR_PUBLIC_IP}/32 \
+ --region $AWS_REGION
+
+# Add Kubernetes API access from your IP
+aws ec2 authorize-security-group-ingress \
+ --group-name provider-sg \
+ --protocol tcp \
+ --port 6443 \
+ --cidr ${YOUR_PUBLIC_IP}/32 \
+ --region $AWS_REGION
+
+# Add HTTP/HTTPS access (public)
+aws ec2 authorize-security-group-ingress \
+ --group-name provider-sg \
+ --protocol tcp \
+ --port 80 \
+ --cidr 0.0.0.0/0 \
+ --region $AWS_REGION
+```
+
+!!! warning "Important"
+ Note the security group ID returned by the create command.
+
+### Step 2: Launch Provider Instance
+
+For the Provider instance we use the Ubuntu 22.04 LTS image (`ami-0694d931cee176e7d`) and the `t3.xlarge` instance type. Feel free to change these parameters, especially if the expected load exceeds the capacity of the virtual machine.
+
+```bash
+# Replace with your security group ID
+export PROVIDER_SG_ID="sg-xxxxxxxxx"
+
+# Launch Provider instance
+aws ec2 run-instances \
+ --image-id ami-0694d931cee176e7d \
+ --instance-type t3.xlarge \
+ --key-name dataspace-key \
+ --security-group-ids $PROVIDER_SG_ID \
+ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=provider}]' \
+ --region $AWS_REGION
+```
+
+!!! warning "Important"
+ Note the instance ID returned by this command.
+
+### Step 3: Assign Elastic IP
+
+```bash
+# Replace with your Provider instance ID
+export PROVIDER_INSTANCE_ID="i-xxxxxxxxx"
+
+# Allocate Elastic IP
+aws ec2 allocate-address \
+ --domain vpc \
+ --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=provider-ip}]' \
+ --region $AWS_REGION
+
+# Associate IP to instance (replace ALLOCATION_ID with the one returned above)
+aws ec2 associate-address \
+ --instance-id $PROVIDER_INSTANCE_ID \
+ --allocation-id ALLOCATION_ID_FROM_ABOVE \
+ --region $AWS_REGION
+```
+
+### Step 4: Verify Instance Status
+
+```bash
+aws ec2 describe-instances \
+ --instance-ids $PROVIDER_INSTANCE_ID \
+ --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value | [0], PublicIpAddress, State.Name]' \
+ --output table \
+ --region $AWS_REGION
+```
+
+### Step 5: Prepare Instance Storage
+
+Since the Provider handles more data, you may need to increase the EBS volume size:
+
+1. Go to AWS Console → EC2 → Volumes
+2. Find the volume associated with your Provider instance
+3. Select it and click "Actions" → "Modify volume"
+4. Increase the size to at least 16 GB
+5. Save the changes
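The same resize can be done from the CLI with `aws ec2 modify-volume`, assuming you know the volume ID. The guard is ours, so the sketch does nothing until `VOLUME_ID` is set:

```shell
# CLI alternative to resizing via the AWS Console.
AWS_REGION="${AWS_REGION:-eu-west-1}"
VOLUME_ID="${VOLUME_ID:-}"   # e.g. the volume attached to the Provider instance
if [ -n "$VOLUME_ID" ]; then
  aws ec2 modify-volume --volume-id "$VOLUME_ID" --size 16 --region "$AWS_REGION"
else
  RESULT="skipped"
  echo "set VOLUME_ID to resize the volume via CLI"
fi
```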
+
+```bash
+# Replace with your Provider public IP
+export PROVIDER_IP="YOUR_PROVIDER_IP"
+
+# Connect to the instance
+ssh -i "dataspace-key.pem" ubuntu@$PROVIDER_IP
+
+# Update and install utilities
+sudo apt-get update && sudo apt-get install -y cloud-guest-utils
+
+# Expand the partition and filesystem
+sudo growpart /dev/nvme0n1 1
+sudo resize2fs /dev/root
+
+# Verify the changes
+df -h
+```
+
+### Step 6: Install k3s
+
+```bash
+# Install k3s
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san $PROVIDER_IP" sh -
+
+# Get the kubeconfig
+sudo cat /etc/rancher/k3s/k3s.yaml
+```
+
+### Step 7: Configure Local Access
+
+On your local machine, create a kubeconfig file for the Provider:
+
+```bash
+# Create k3s-provider.yaml with the content from the previous step
+# Replace 127.0.0.1 with your Provider IP in the server field
+# The file should contain:
+# server: https://YOUR_PROVIDER_IP:6443
+
+# Test the connection
+export KUBECONFIG=k3s-provider.yaml
+kubectl get nodes
+```
+
+### Step 8: Configure Storage and Namespace
+
+```bash
+# Enable storage provisioner
+kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml
+
+# Wait a few seconds for it to start. You can check its status with
+kubectl get pods -n local-path-storage
+
+# Create namespace
+kubectl create namespace provider
+```
+
+### Step 9: Create Provider Identity
+
+```bash
+# Create directory for identity files
+
+# generate the private key - dont get confused about the curve, openssl uses the name `prime256v1` for `secp256r1`(as defined by P-256)
+openssl ecparam -name prime256v1 -genkey -noout -out provider-identity/private-key.pem
+
+# Generate corresponding public key
+openssl ec -in provider-identity/private-key.pem -pubout -out provider-identity/public-key.pem
+
+# Create a (self-signed) certificate
+openssl req -new -x509 -key provider-identity/private-key.pem -out provider-identity/cert.pem -days 360
+
+# Export the keystore
+openssl pkcs12 -export -inkey provider-identity/private-key.pem -in provider-identity/cert.pem -out provider-identity/cert.pfx -name didPrivateKey
+
+# Check the contents
+keytool -v -keystore provider-identity/cert.pfx -list -alias didPrivateKey
+
+# Generate did from the keystore
+wget https://github.com/wistefan/did-helper/releases/download/0.1.1/did-helper
+chmod +x did-helper
+./did-helper -keystorePath ./provider-identity/cert.pfx -keystorePassword=test
+```
+
+!!! warning "Important"
+ Note the DID returned by the `did-helper`. It is the provider DID.
+
+### Step 10: Deploy Identity Secret
+
+```bash
+# Create secret with the identity
+kubectl create secret generic provider-identity --from-file=provider-identity/cert.pfx -n provider
+```
+
+### Step 11: Configure Values
+
+!!! danger
+
+ Before deploying, you must modify the Provider's `values.yaml` file to use your actual IP address instead of `127.0.0.1.nip.io`. Other variables, such as the provider DID, must also be updated. In your `provider/values.yaml` file, make these changes:
+
+```yaml
+# Summary of Changes in provider/values.yaml
+
+## 1. Hostnames updated from localhost (127.0.0.1.nip.io) to YOUR_PROVIDER_IP (YOUR_PROVIDER_IP.nip.io)
+- provider-verifier.127.0.0.1.nip.io → provider-verifier.YOUR_PROVIDER_IP.nip.io
+- til-provider.127.0.0.1.nip.io → til-provider.YOUR_PROVIDER_IP.nip.io
+- mp-data-service.127.0.0.1.nip.io → mp-data-service.YOUR_PROVIDER_IP.nip.io
+- pap-provider.127.0.0.1.nip.io → pap-provider.YOUR_PROVIDER_IP.nip.io
+- tm-forum-api.127.0.0.1.nip.io → tm-forum-api.YOUR_PROVIDER_IP.nip.io
+
+## 2. DID & TIR configuration updated
+- tirAddress: http://tir.127.0.0.1.nip.io:8080 → tirAddress: http://trusted-issuers-list:8080
+- did: did:key:zDnaeQfjsx66YNYV86SDBB1e5kunWKJcWwk686dvjirEE7pqW → did: did:key:provider_key
+
+## 3. Server host URLs updated
+- host: http://provider-verifier.127.0.0.1.nip.io:8080
+ → host: http://provider-verifier.YOUR_PROVIDER_IP.nip.io
+
+## 4. Added fullnameOverride to trusted-issuers-list
++ fullnameOverride: trusted-issuers-list
+
+## 5. APISIX routes and upstream hostnames updated
+- hostname: mp-data-service.127.0.0.1.nip.io → hostname: mp-data-service.YOUR_PROVIDER_IP.nip.io
+- host: mp-data-service.127.0.0.1.nip.io → host: mp-data-service.YOUR_PROVIDER_IP.nip.io
+
+## 6. ODRL PAP organization DID updated
+- value: did:key:zDnaeQfjsx66YNYV86SDBB1e5kunWKJcWwk686dvjirEE7pqW
+ → value: did:key:provider_key
+
+## 7. Scorpio trustedParticipantsLists endpoints updated
+- http://tir.trust-anchor.svc.cluster.local:8080 → http://tir.TRUST_ANCHOR_IP.nip.io
+```
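+
+Most of the host changes above are mechanical, so they can be scripted. A minimal sketch (assuming `YOUR_PROVIDER_IP` holds your Elastic IP; review the file afterwards, since the `tirAddress`, the provider DID, and the `trustedParticipantsLists` endpoint still need manual edits):
+
+```bash
+# Replace every 127.0.0.1.nip.io host with your external IP (sketch)
+export YOUR_PROVIDER_IP="203.0.113.10"   # replace with your Elastic IP
+sed -i "s/127\.0\.0\.1\.nip\.io/${YOUR_PROVIDER_IP}.nip.io/g" provider/values.yaml
+
+# Review the result: tirAddress, the provider DID and the trust anchor
+# endpoint are NOT covered by this substitution and need manual edits
+grep -n "nip.io" provider/values.yaml
+```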
+
+### Step 12: Add Helm Repository
+
+```bash
+helm repo add data-space-connector https://fiware.github.io/data-space-connector/
+helm repo update
+```
+
+### Step 13: Deploy Provider
+
+```bash
+# Deploy using your modified values file
+helm install provider-dsc data-space-connector/data-space-connector \
+ --version 7.17.0 \
+ -f provider/values.yaml \
+ --namespace=provider
+
+# Monitor deployment
+kubectl get pods -n provider -w
+```
+
+### Changes and updates
+```bash
+# Update
+helm upgrade provider-dsc data-space-connector/data-space-connector -f provider/values.yaml --namespace provider
+
+# Monitor
+watch kubectl get pods -n provider
+```
+
diff --git a/docs/documentation/mv_data_space/fiware/trust_anchor.md b/docs/documentation/mv_data_space/fiware/trust_anchor.md
new file mode 100644
index 00000000..fc2c70ac
--- /dev/null
+++ b/docs/documentation/mv_data_space/fiware/trust_anchor.md
@@ -0,0 +1,196 @@
+---
+title: Trust Anchor
+---
+
+!!! warning
+ Check the [prerequisites section](../index.md#common-setup-steps) before proceeding with the deployment.
+
+## Step by Step AWS deployment
+
+!!! warning
+
+ If you are joining an existing dataspace, this step should be skipped as you will use the trust anchor of the dataspace you want to join.
+
+The Trust Anchor provides the basic trust infrastructure for the data space. It is usually the first component to be deployed if you are setting up a data space from scratch.
+
+### Step 1: Create Security Group
+
+Create a dedicated security group for the Trust Anchor:
+
+```bash
+# Set your configuration
+export YOUR_PUBLIC_IP="YOUR_IP_HERE" # Replace with your public IP
+export AWS_REGION="eu-west-1" # Replace with your preferred region
+
+# Create security group
+aws ec2 create-security-group \
+ --group-name trust-anchor-sg \
+ --description "Security group for Trust Anchor" \
+ --region $AWS_REGION
+
+# Add SSH access from your IP
+aws ec2 authorize-security-group-ingress \
+ --group-name trust-anchor-sg \
+ --protocol tcp \
+ --port 22 \
+ --cidr ${YOUR_PUBLIC_IP}/32 \
+ --region $AWS_REGION
+
+# Add Kubernetes API access from your IP
+aws ec2 authorize-security-group-ingress \
+ --group-name trust-anchor-sg \
+ --protocol tcp \
+ --port 6443 \
+ --cidr ${YOUR_PUBLIC_IP}/32 \
+ --region $AWS_REGION
+
+# Add HTTP/HTTPS access (public)
+aws ec2 authorize-security-group-ingress \
+ --group-name trust-anchor-sg \
+ --protocol tcp \
+ --port 80 \
+ --cidr 0.0.0.0/0 \
+ --region $AWS_REGION
+```
+
+!!! warning "Important"
+    Note the security group ID returned by the create command.
+
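+If you prefer not to copy the ID by hand, it can be looked up afterwards with a `describe-security-groups` query (a sketch; the JMESPath `--query` below assumes the group name used above and a default-VPC setup):
+
+```bash
+# Look up the security group ID by name and sanity-check its format
+export TRUST_ANCHOR_SG_ID=$(aws ec2 describe-security-groups \
+  --group-names trust-anchor-sg \
+  --query 'SecurityGroups[0].GroupId' \
+  --output text \
+  --region $AWS_REGION)
+
+echo "$TRUST_ANCHOR_SG_ID" | grep -q '^sg-' && echo "Security group ID looks valid"
+```
+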
+### Step 2: Launch Trust Anchor Instance
+For the Trust Anchor instance we use the Ubuntu 22.04 LTS image (`ami-0694d931cee176e7d`) and the `t3.medium` instance type. Feel free to change these parameters, especially if the expected load exceeds the capacity of the virtual machine.
+
+```bash
+# Replace with your security group ID
+export TRUST_ANCHOR_SG_ID="sg-xxxxxxxxx"
+
+# Launch Trust Anchor instance
+aws ec2 run-instances \
+ --image-id ami-0694d931cee176e7d \
+ --instance-type t3.medium \
+ --key-name dataspace-key \
+ --security-group-ids $TRUST_ANCHOR_SG_ID \
+ --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=trust-anchor}]' \
+ --region $AWS_REGION
+```
+
+!!! warning "Important"
+ Note the instance ID returned by this command.
+
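+The instance ID can also be recovered later from the `Name` tag set at launch (a sketch; assumes the tag value used above):
+
+```bash
+# Look up the instance ID via its Name tag
+export TRUST_ANCHOR_INSTANCE_ID=$(aws ec2 describe-instances \
+  --filters "Name=tag:Name,Values=trust-anchor" "Name=instance-state-name,Values=pending,running" \
+  --query 'Reservations[0].Instances[0].InstanceId' \
+  --output text \
+  --region $AWS_REGION)
+
+echo "$TRUST_ANCHOR_INSTANCE_ID" | grep -q '^i-' && echo "Instance ID looks valid"
+```
+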
+### Step 3: Assign Elastic IP
+
+```bash
+# Replace with your Trust Anchor instance ID
+export TRUST_ANCHOR_INSTANCE_ID="i-xxxxxxxxx"
+
+# Allocate Elastic IP
+aws ec2 allocate-address \
+ --domain vpc \
+ --tag-specifications 'ResourceType=elastic-ip,Tags=[{Key=Name,Value=trust-anchor-ip}]' \
+ --region $AWS_REGION
+
+# Associate IP to instance (replace ALLOCATION_ID with the one returned above)
+aws ec2 associate-address \
+ --instance-id $TRUST_ANCHOR_INSTANCE_ID \
+ --allocation-id ALLOCATION_ID_FROM_ABOVE \
+ --region $AWS_REGION
+```
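+
+If you lost track of the allocation ID, it can be looked up via the Elastic IP's `Name` tag (a sketch; assumes the tag value used in the allocate command):
+
+```bash
+# Look up the allocation ID via the Elastic IP's Name tag
+export TRUST_ANCHOR_ALLOCATION_ID=$(aws ec2 describe-addresses \
+  --filters "Name=tag:Name,Values=trust-anchor-ip" \
+  --query 'Addresses[0].AllocationId' \
+  --output text \
+  --region $AWS_REGION)
+
+echo "$TRUST_ANCHOR_ALLOCATION_ID" | grep -q '^eipalloc-' && echo "Allocation ID looks valid"
+```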
+
+### Step 4: Verify Instance Status
+
+```bash
+aws ec2 describe-instances \
+ --instance-ids $TRUST_ANCHOR_INSTANCE_ID \
+ --query 'Reservations[*].Instances[*].[Tags[?Key==`Name`].Value | [0], PublicIpAddress, State.Name]' \
+ --output table \
+ --region $AWS_REGION
+```
+
+### Step 5: Install k3s
+
+```bash
+# Replace with your Trust Anchor public IP
+export TRUST_ANCHOR_IP="YOUR_TRUST_ANCHOR_IP"
+
+# Connect to the instance
+ssh -i "dataspace-key.pem" ubuntu@$TRUST_ANCHOR_IP
+
+# Install k3s
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--tls-san $TRUST_ANCHOR_IP" sh -
+
+# Get the kubeconfig
+sudo cat /etc/rancher/k3s/k3s.yaml
+```
+
+### Step 6: Configure Local Access
+
+On your local machine, create a kubeconfig file for the Trust Anchor:
+
+```bash
+# Create k3s-trust-anchor.yaml with the content from the previous step (cat command)
+# Replace 127.0.0.1 with your public Trust Anchor IP in the server field
+# The file should contain:
+# server: https://YOUR_TRUST_ANCHOR_IP:6443
+
+# Test the connection
+export KUBECONFIG=k3s-trust-anchor.yaml
+kubectl get nodes
+```
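+
+The copy-and-edit step can also be scripted from your local machine (a sketch; assumes the default k3s kubeconfig path, which is root-only, hence `sudo cat` over SSH):
+
+```bash
+# Fetch the kubeconfig from the instance
+ssh -i "dataspace-key.pem" ubuntu@$TRUST_ANCHOR_IP \
+  "sudo cat /etc/rancher/k3s/k3s.yaml" > k3s-trust-anchor.yaml
+
+# Point the server field at the public IP instead of localhost
+sed -i "s/127\.0\.0\.1/${TRUST_ANCHOR_IP}/" k3s-trust-anchor.yaml
+```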
+
+### Step 7: Configure Storage
+
+```bash
+# Enable storage provisioner
+kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.30/deploy/local-path-storage.yaml
+
+# Wait a few seconds for it to start. You can check its status with
+kubectl get pods -n local-path-storage
+```
+
+### Step 8: Add Helm Repository
+
+```bash
+helm repo add data-space-connector https://fiware.github.io/data-space-connector/
+helm repo update
+```
+
+### Step 9: Configure Values
+
+!!! danger
+    Before deploying, you must modify the Trust Anchor's `trust-anchor/values.yaml` file to use your actual IP address instead of `127.0.0.1.nip.io`. Replace the `tir` host reference `127.0.0.1.nip.io` with `YOUR_TRUST_ANCHOR_IP.nip.io`; this ensures that the Trusted Issuers Registry (TIR) is reachable from outside the local environment.
+
+```yaml
+trusted-issuers-list:
+ tir:
+ enabled: true
+ hosts:
+ - host: tir.YOUR_TRUST_ANCHOR_IP.nip.io
+ til:
+ enabled: true
+ hosts:
+ - host: til.127.0.0.1.nip.io # Do not modify
+```
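+
+This single host change can be scripted (a sketch; only the `tir` host is touched, leaving the `til` host on localhost as the comment above requires):
+
+```bash
+export TRUST_ANCHOR_IP="YOUR_TRUST_ANCHOR_IP"   # replace with your Elastic IP
+sed -i "s/tir\.127\.0\.0\.1\.nip\.io/tir.${TRUST_ANCHOR_IP}.nip.io/" trust-anchor/values.yaml
+
+# Verify: only the tir host should have changed
+grep -n "nip.io" trust-anchor/values.yaml
+```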
+
+### Step 10: Create namespace
+
+```bash
+# Create namespace
+kubectl create namespace trust-anchor
+```
+
+### Step 11: Deploy Trust Anchor
+
+```bash
+# Deploy using your modified values file
+helm install trust-anchor data-space-connector/trust-anchor --version 0.2.0 -f trust-anchor/values.yaml --namespace=trust-anchor
+
+# Monitor deployment
+watch kubectl get pods -n trust-anchor
+```
+
+### Step 12: Changes and updates
+```bash
+# Upgrade
+helm upgrade trust-anchor data-space-connector/trust-anchor -f trust-anchor/values.yaml --namespace trust-anchor
+
+# Monitor
+watch kubectl get pods -n trust-anchor
+```
\ No newline at end of file
diff --git a/docs/documentation/mv_data_space/index.md b/docs/documentation/mv_data_space/index.md
new file mode 100644
index 00000000..ae2a4795
--- /dev/null
+++ b/docs/documentation/mv_data_space/index.md
@@ -0,0 +1,146 @@
+---
+title: MV Data Space on AWS Cloud
+---
+
+## Overview
+
+This guide provides step-by-step instructions for deploying a [Minimum Viable Data Space (MVDS)](../../getting_started/data_spaces/index.md#minimum-viable-data-space) on AWS with three distinct roles using Fiware's Technology ([Trust Framework](../trust_frameworks/fiware_trust_anchor/index.md), [FDS Connector](../data_space_connectors/fiware/index.md)). Each role is completely independent and can be deployed separately:
+
+- **Trust Anchor**: Manages the identities and credentials of participants in the data space, ensuring trustworthiness and security. **Only one instance** of this role is needed per data space.
+- **Consumer**: Requests and consumes data/services from providers. **Each participant needs** its own consumer instance.
+- **Provider**: Offers data/services to consumers. **Each participant needs** its own provider instance.
+
+!!! warning "Prerequisites"
+
+ Before starting any deployment, ensure you have:
+
+ * [x] AWS CLI installed and configured with appropriate credentials.
+ * [x] `kubectl` installed on your system ([Installation Guide](https://kubernetes.io/docs/tasks/tools/)).
+ * [x] `helm` installed on your system ([Installation Guide](https://helm.sh/docs/intro/install/)).
+ * [x] Basic understanding of Kubernetes and AWS EC2.
+
+## Common Setup Steps
+
+### Get Your Public IP
+
+First, determine your current public IP address for security group configuration:
+
+```bash
+curl -s https://checkip.amazonaws.com
+```
+
+Note this IP address; you'll need it for the security group configuration of each role.
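+
+To avoid retyping it, the address can be captured directly into the environment variable used in the later steps (assuming the same checkip endpoint):
+
+```bash
+export YOUR_PUBLIC_IP=$(curl -s https://checkip.amazonaws.com)
+
+# Sanity-check that it looks like an IPv4 address
+echo "$YOUR_PUBLIC_IP" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$' && echo "IP captured: $YOUR_PUBLIC_IP"
+```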
+
+### Create SSH Key Pair
+
+We will also need an SSH key to access the EC2 instances. Create an SSH key pair that will be used across all deployments:
+
+```bash
+aws ec2 create-key-pair \
+ --key-name dataspace-key \
+ --query 'KeyMaterial' \
+ --output text > dataspace-key.pem
+
+chmod 400 dataspace-key.pem
+```
+
+### Clone deployment repository
+
+```bash
+# Clone
+git clone https://github.com/wistefan/deployment-demo.git
+cd deployment-demo
+
+# Open with your preferred editor. For instance:
+code .
+```
+
+## Components Deployment
+
+Below you'll find deployment instructions for each component in our Minimum Viable Data Space. Choose the component you need to deploy based on your role in the data space. Remember that a complete data space requires at least one Trust Anchor and at least one pair of Provider and Consumer. Follow the links for detailed deployment instructions specific to each component.
+
+
+
+- :material-account:{ .lg .middle } __Consumer__
+
+ ---
+
+ Fiware Data Space Connector (_Consumer role_) that is configured to access and retrieve data/services from the data space.
+
+ [:octicons-arrow-right-24: _AWS_](../../documentation/mv_data_space/fiware/consumer.md)
+
+ [:octicons-arrow-right-24: _Technical Details_](../../documentation/data_space_connectors/fiware/index.md#consumer)
+
+- :material-factory:{ .lg .middle } __Provider__
+
+ ---
+
+ Fiware Data Space Connector (_Provider role_) that is configured to share data/services with the data space.
+
+ [:octicons-arrow-right-24: _AWS_](../../documentation/mv_data_space/fiware/provider.md)
+
+ [:octicons-arrow-right-24: _Technical Details_](../../documentation/data_space_connectors/fiware/index.md#provider)
+
+- :material-security:{ .lg .middle } __Trust Anchor__
+
+ ---
+
+ !!! warning
+
+ It is not necessary to deploy this if you want to connect to an existing Data Space.
+
+ It serves as a trusted entity that issues and manages digital certificates (Verifiable Credentials) for organizations and individuals participating in the data space.
+
+ [:octicons-arrow-right-24: _AWS_](../../documentation/mv_data_space/fiware/trust_anchor.md)
+
+ [:octicons-arrow-right-24: _Technical Details_](../trust_frameworks/fiware_trust_anchor/index.md)
+
+
+
+## Per-Role Cleanup
+
+??? tip "Trust Anchor"
+
+ ```bash
+ export TRUST_ANCHOR_INSTANCE_ID="i-xxxxxxxxx"
+ export TRUST_ANCHOR_ALLOCATION_ID="eipalloc-xxxxxxxxx"
+
+    helm uninstall trust-anchor -n trust-anchor
+    kubectl delete namespace trust-anchor
+    aws ec2 terminate-instances --instance-ids $TRUST_ANCHOR_INSTANCE_ID --region $AWS_REGION
+    aws ec2 release-address --allocation-id $TRUST_ANCHOR_ALLOCATION_ID --region $AWS_REGION
+    aws ec2 delete-security-group --group-name trust-anchor-sg --region $AWS_REGION
+ ```
+
+??? tip "Consumer"
+
+ ```bash
+ export CONSUMER_INSTANCE_ID="i-xxxxxxxxx"
+ export CONSUMER_ALLOCATION_ID="eipalloc-xxxxxxxxx"
+
+ helm uninstall consumer-dsc -n consumer
+ kubectl delete namespace consumer
+ aws ec2 terminate-instances --instance-ids $CONSUMER_INSTANCE_ID --region $AWS_REGION
+ aws ec2 release-address --allocation-id $CONSUMER_ALLOCATION_ID --region $AWS_REGION
+ aws ec2 delete-security-group --group-name consumer-sg --region $AWS_REGION
+ ```
+
+??? tip "Provider"
+
+ ```bash
+ export PROVIDER_INSTANCE_ID="i-xxxxxxxxx"
+ export PROVIDER_ALLOCATION_ID="eipalloc-xxxxxxxxx"
+
+ helm uninstall provider-dsc -n provider
+ kubectl delete namespace provider
+ aws ec2 terminate-instances --instance-ids $PROVIDER_INSTANCE_ID --region $AWS_REGION
+ aws ec2 release-address --allocation-id $PROVIDER_ALLOCATION_ID --region $AWS_REGION
+ aws ec2 delete-security-group --group-name provider-sg --region $AWS_REGION
+ ```
+
+## Background Information
+
+### nip.io Service
+nip.io is a free wildcard DNS service that converts subdomains like `service.1.2.3.4.nip.io` into A records pointing to `1.2.3.4`. This eliminates the need for custom DNS configuration during development.
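+
+For example, a DNS lookup of any such name returns the address embedded in it (a sketch; requires outbound DNS and the `dig` tool):
+
+```bash
+# Any subdomain of <ip>.nip.io resolves to <ip>
+dig +short tir.203.0.113.10.nip.io
+# should print 203.0.113.10
+```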
+
+### Architecture
+Each role runs on its own EC2 instance with a dedicated k3s Kubernetes cluster. The Ingress Controller in each cluster routes traffic based on hostnames to the appropriate internal services.
\ No newline at end of file
diff --git a/docs/documentation/trust_frameworks/fiware_trust_anchor/img/trust_anchor_arch.svg b/docs/documentation/trust_frameworks/fiware_trust_anchor/img/trust_anchor_arch.svg
new file mode 100644
index 00000000..094fca32
--- /dev/null
+++ b/docs/documentation/trust_frameworks/fiware_trust_anchor/img/trust_anchor_arch.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/docs/documentation/trust_frameworks/fiware_trust_anchor/index.md b/docs/documentation/trust_frameworks/fiware_trust_anchor/index.md
new file mode 100644
index 00000000..fb4cf582
--- /dev/null
+++ b/docs/documentation/trust_frameworks/fiware_trust_anchor/index.md
@@ -0,0 +1,30 @@
+---
+title: Technical Details
+---
+
+## Technical Details
+
+The **Trusted-Issuers-List Service** provides an [EBSI-Trusted Issuers Registry](https://hub.ebsi.eu/apis/pilot/trusted-issuers-registry/v4) implementation to act as the Trusted-List-Service in the DSBA Trust and IAM Framework. In addition, a Trusted Issuers List API to manage the issuers is provided.
+
+Both APIs [Trusted-Issuers-List API](https://github.com/FIWARE/trusted-issuers-list/blob/main/api/trusted-issuers-list.yaml) and [Trusted-Issuers-Registry API](https://github.com/FIWARE/trusted-issuers-list/blob/main/api/trusted-issuers-registry.yaml), are found on the **same port** (*by default 8080*) but different contexts.
+
+- **Trusted-Issuers-List API**: `/issuer`
+- **Trusted-Issuers-Registry API**: `/v4/issuers/`
+
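+With a running deployment, the two contexts can be probed with `curl` (a sketch; the hostnames below are the local-deployment defaults and will differ in an AWS setup):
+
+```bash
+# Trusted-Issuers-List API (issuer management)
+curl -s http://til.127.0.0.1.nip.io:8080/issuer
+
+# Trusted-Issuers-Registry API (EBSI-compatible read API)
+curl -s http://tir.127.0.0.1.nip.io:8080/v4/issuers/
+```
+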
+!!! info "Fiware Trust Anchor"
+    The Fiware Trust Anchor is based on the [FIWARE Trusted Issuers List](https://github.com/FIWARE/trusted-issuers-list).
+
+
+
+
+> _The default setup of the connector requires an EBSI-Trusted Issuers Registry to provide the list of participants. The local Data Space comes with the FIWARE Trusted Issuers List as a rather simple implementation of that API, providing CRUD functionality for issuers and storage in a MySQL database. After deployment, the API is available at http://tir.127.0.0.1.nip.io:8080. Both participants are automatically registered as "Trusted Issuers" in the registry with their DIDs._
+
+## API details (version 0.0.2)
+
+### Trusted Issuers List
+
+
+
+### Trusted Issuers Registry
+
+
diff --git a/docs/documentation/trust_frameworks/index.md b/docs/documentation/trust_frameworks/index.md
new file mode 100644
index 00000000..57b2d0ce
--- /dev/null
+++ b/docs/documentation/trust_frameworks/index.md
@@ -0,0 +1,38 @@
+---
+title: Trust Frameworks
+---
+
+A _trust framework_ is a set of policies, principles, and mechanisms that establish and maintain trust among participants in a data space ecosystem. It outlines the rules and agreements that govern how data is accessed, used, and shared, ensuring security, transparency, compliance with regulations, and ethical use. It often includes aspects like identity verification, access control, data protection, and mutual agreements between entities to facilitate trustworthy collaboration.
+
+> _Every Data Space requires a framework that ensures trust between the participants. Depending on the requirements of the concrete Data Space, this can become a rather complex topic. Various trust-providers exist (e.g. Gaia-X Digital Clearing Houses) and could be reused ([see Gaia-X for more information](https://github.com/FIWARE/data-space-connector/blob/main/doc/GAIA_X.MD))._ **Source: [_Fiware data space connector_](https://github.com/FIWARE/data-space-connector/blob/main/doc/deployment-integration/local-deployment/LOCAL.MD#the-trust-anchor)**
+
+
+
+- :material-rocket-outline:{ .lg .middle } __Fiware Trusted Issuers List__
+
+ ---
+
+ _EBSI Trusted Issuers Registry implementation to act as the Trusted-List-Service in the DSBA Trust and IAM Framework._
+
+ [:octicons-arrow-right-24: Official Documentation](https://github.com/FIWARE/trusted-issuers-list/blob/main/README.md)
+
+ [:octicons-arrow-right-24: _More Details_](./fiware_trust_anchor/index.md)
+
+ [:octicons-arrow-right-24: _Deployment_](../mv_data_space/fiware/trust_anchor.md)
+
+- :material-rocket-outline:{ .lg .middle } __Gaia-X Digital Clearing House (GXDCH)__
+
+ ---
+
+ _The Gaia-X Framework describes functional specifications, technical requirements, and software assets necessary to get Gaia-X Compliance._
+
+ [:octicons-arrow-right-24: Official Documentation](https://gaia-x.eu/services-deliverables/digital-clearing-house/)
+
+- :material-rocket-outline:{ .lg .middle } __iShare Trust Framework__
+
+ ---
+
+ _The iSHARE Trust Framework is a collaborative effort to improve the exchange of data between organisations in and across data spaces. The Framework results in a set of agreements which improve circumstances for data exchange._
+
+ [:octicons-arrow-right-24: Official Documentation](https://framework.ishare.eu/)
+
\ No newline at end of file
diff --git a/docs/documentation/verifiable_credentials/index.md b/docs/documentation/verifiable_credentials/index.md
new file mode 100644
index 00000000..4e78818b
--- /dev/null
+++ b/docs/documentation/verifiable_credentials/index.md
@@ -0,0 +1,81 @@
+---
+title: Verifiable Credentials
+---
+
+## Verifiable Credential Issuer
+
+Every participant in a data space must have a Decentralized Identifier (DID) that identifies them. Keycloak also needs the key material associated with this DID in order to set the organization's DID as the issuer in the Verifiable Credentials it issues.
+
+To generate the DID in the format required by Keycloak, there is a Dockerized tool called _[did-helper](https://github.com/wistefan/did-helper)_ that produces all the necessary files by configuring just a few parameters to identify your organization:
+
+```shell
+docker run -v $(pwd):/cert \
+ -e OUTPUT_FORMAT="env" \
+ -e OUTPUT_FILE="/cert/did.env" \
+ -e KEY_ALIAS="didPrivateKey" \
+ -e STORE_PASS="fill_me" \
+ -e COUNTRY="BE" \
+ -e STATE="BRUSSELS" \
+ -e LOCALITY="Brussels" \
+ -e ORGANIZATION="Fancy Marketplace Co." \
+ -e COMMON_NAME="www.fancy-marketplace.biz" \
+ quay.io/wi_stefan/did-helper:0.2.0
+```
+
+This will generate two files required by Keycloak to issue Verifiable Credentials: `did.env` and `cert.pfx`.
+
+!!! warning
+
+ Although the process of generating the DID and its use in Keycloak is automated in local FIWARE deployments, this practice, as stated by FIWARE, is not acceptable in production environments. The generation and custody of the DID should be the responsibility of the organization.
+
+
+
+## Identity Management - Keycloak
+
+[Keycloak](https://www.keycloak.org/) is a powerful open-source Identity and Access Management solution that can be used to issue Verifiable Credentials (VCs) for users or services in a Data Space. It provides a flexible and secure way to manage identities, roles, and access policies. Some of the key features of Keycloak include:
+
+1. Manage identities of users and services
+2. Issue Verifiable Credentials
+3. Control access to resources
+4. Establish trust through digital identity verification
+
+!!! info
+
+ When configured with your organization's DID, **Keycloak can issue trusted Verifiable Credentials that other participants in the data space can verify**.
+
+### Concepts
+
+Keycloak operates with the concept of [realms](#realms), [clients](#clients), and [users](#users):
+
+#### Realms
+
+A realm is an isolated logical grouping of users, roles, and clients.
+
+Within a realm:
+
+- Users and their credentials are managed.
+- Clients are configured, which are applications that use Keycloak for authentication.
+- Roles, groups, access policies, etc. are defined.
+- Authentication and login flows can be customized (screens, MFA, etc.).
+- Each realm is completely independent from others.
+
+#### Clients
+
+Clients are applications that use Keycloak for authentication. For each provider that you want to connect to the Data Space, you need to create a new client. Each client can be configured with its own settings, such as:
+
+- Client ID and secret.
+- Redirect URIs.
+- Authentication flows.
+- Access policies.
+
+#### Users
+
+Users are the entities or members of your organization that will authenticate against Keycloak to access the data space. They can be individuals or services. Each user can have:
+
+- A unique username and password.
+- Roles assigned to them.
+- Attributes and metadata.
+
+### Configuration
+
+To configure Keycloak for issuing Verifiable Credentials, follow these steps: [Hands-On Configuration](./keycloak/index.md).
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/10_add_user_assign_roles.png b/docs/documentation/verifiable_credentials/keycloak/img/10_add_user_assign_roles.png
new file mode 100644
index 00000000..feb22191
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/10_add_user_assign_roles.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/11_add_user_assign_operator_role.png b/docs/documentation/verifiable_credentials/keycloak/img/11_add_user_assign_operator_role.png
new file mode 100644
index 00000000..be90f665
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/11_add_user_assign_operator_role.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/12_import_client.png b/docs/documentation/verifiable_credentials/keycloak/img/12_import_client.png
new file mode 100644
index 00000000..1924a255
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/12_import_client.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/13_import_client_form.png b/docs/documentation/verifiable_credentials/keycloak/img/13_import_client_form.png
new file mode 100644
index 00000000..7e7f81e0
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/13_import_client_form.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/14_create_client_role.png b/docs/documentation/verifiable_credentials/keycloak/img/14_create_client_role.png
new file mode 100644
index 00000000..0bc925e3
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/14_create_client_role.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/15_assign_client_role.png b/docs/documentation/verifiable_credentials/keycloak/img/15_assign_client_role.png
new file mode 100644
index 00000000..878b5c15
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/15_assign_client_role.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/1_login.png b/docs/documentation/verifiable_credentials/keycloak/img/1_login.png
new file mode 100644
index 00000000..d6dbc91c
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/1_login.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/2_realm_creation.png b/docs/documentation/verifiable_credentials/keycloak/img/2_realm_creation.png
new file mode 100644
index 00000000..070f6434
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/2_realm_creation.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/2_realm_import.png b/docs/documentation/verifiable_credentials/keycloak/img/2_realm_import.png
new file mode 100644
index 00000000..94c48288
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/2_realm_import.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/3_realm_selection.png b/docs/documentation/verifiable_credentials/keycloak/img/3_realm_selection.png
new file mode 100644
index 00000000..e9b7a5de
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/3_realm_selection.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/4_realm_setting.png b/docs/documentation/verifiable_credentials/keycloak/img/4_realm_setting.png
new file mode 100644
index 00000000..08d43a7c
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/4_realm_setting.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/5_realm_keys.png b/docs/documentation/verifiable_credentials/keycloak/img/5_realm_keys.png
new file mode 100644
index 00000000..a015f6c8
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/5_realm_keys.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/6_client_provider.png b/docs/documentation/verifiable_credentials/keycloak/img/6_client_provider.png
new file mode 100644
index 00000000..0fe11572
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/6_client_provider.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/7_add_user.png b/docs/documentation/verifiable_credentials/keycloak/img/7_add_user.png
new file mode 100644
index 00000000..d673eff6
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/7_add_user.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/8_add_user_form.png b/docs/documentation/verifiable_credentials/keycloak/img/8_add_user_form.png
new file mode 100644
index 00000000..1f196f9e
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/8_add_user_form.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/img/9_add_user_set_password.png b/docs/documentation/verifiable_credentials/keycloak/img/9_add_user_set_password.png
new file mode 100644
index 00000000..02d8c8c1
Binary files /dev/null and b/docs/documentation/verifiable_credentials/keycloak/img/9_add_user_set_password.png differ
diff --git a/docs/documentation/verifiable_credentials/keycloak/index.md b/docs/documentation/verifiable_credentials/keycloak/index.md
new file mode 100644
index 00000000..4455bea1
--- /dev/null
+++ b/docs/documentation/verifiable_credentials/keycloak/index.md
@@ -0,0 +1,567 @@
+---
+title: Hands-on Configuration
+---
+
+Configuring Keycloak is essential for managing Verifiable Credentials (VCs) in a Data Space. This document provides an overview of how to set up and use Keycloak for issuing VCs.
+
+## Web access
+
+Keycloak has a web interface that allows you to manage different aspects of the Keycloak server, such as creating realms, clients, and users, as well as issuing verifiable credentials (VCs). Access to the Keycloak web interface is through a web browser, using the URL provided during the deployment of the Keycloak server.
+
+If you deploy this service following the [MVDS instructions in AWS](../../mv_data_space/fiware/consumer.md#step-11-configure-values), the access URL for the Keycloak web interface is: `http://keycloak-consumer.YOUR_CONSUMER_IP.nip.io`.
+
+By default, Keycloak is deployed with a preconfigured user `keycloak-admin`. The password for this user is generated during the deployment process. You can get the password by running the following command:
+
+```bash
+kubectl get secret -n consumer -o json issuance-secret | jq '.data."keycloak-admin"' -r | base64 --decode
+```
+
+## Configuration
+
+From the **Keycloak administration console**, you can set up different parts of the Keycloak server:
+
+- [**Realms**](#realms): Isolated logical grouping of users, roles, and clients. You can create multiple realms to separate different environments or applications.
+- [**Clients**](#clients): Clients are applications or services that can authenticate users and request access to resources. In the context of VCs, clients can be providers or consumers that interact with the Data Space.
+- [**Users**](#users): Users are the entities that can authenticate and interact with the Keycloak server. Users can be assigned roles and permissions to access specific resources or perform certain actions.
+
+### Realms
+
+From the Keycloak administration console, you can create a new realm by clicking on the `Add Realm` button.
+
+
+
+!!! bug
+
+ **Creating a realm from scratch using only the web interface is not currently possible.** You need to import a realm configuration from a JSON file or use the Keycloak Admin REST API.
+
+
+#### From json file
+
+You can import a realm configuration from a JSON file. This is useful for setting up predefined realms with specific configurations. The realm configuration is provided in the [template.default_config_all-realm.json](./template.default_config_all-realm.json) file. This file contains the necessary configuration for the Keycloak **realm**, including **clients**, **roles**, and **users**.
+
+!!! warning
+
+    Before importing the realm, ensure that:
+
+    - you have changed the did:key values in the `template.default_config_all-realm.json` file to match the did:key generated by the did-helper. You can find the DID in the `did.env` file inside the Keycloak container.
+    - you have changed the pkcs12 password in the `template.default_config_all-realm.json` file to match the one used when generating `cert.pfx`. You can find the password by executing the command shown in the 'Realm Settings - Keys' section.
+
+##### Initialize json realm file
+
+```bash
+cp template.default_config_all-realm.json default_config_all-realm.json
+sed -i "s/{% raw %}{{ISSUER-DID}}{% endraw %}/$(kubectl exec keycloak-0 -c keycloak -n consumer-raw -- cat /did-material/did.env | cut -d= -f2 | xargs)/g" default_config_all-realm.json
+sed -i "s/{% raw %}{{STORE-PASS}}{% endraw %}/$(kubectl get secret -n consumer-raw -o json issuance-secret | jq '.data."store-pass"' -r | base64 --decode)/g" default_config_all-realm.json
+```
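+
+A quick check that both placeholders were actually substituted (the generated file should no longer contain the placeholder names):
+
+```bash
+# Fail loudly if any placeholder survived the sed substitutions
+if grep -qE 'ISSUER-DID|STORE-PASS' default_config_all-realm.json; then
+  echo "Unreplaced placeholders remain:" && grep -nE 'ISSUER-DID|STORE-PASS' default_config_all-realm.json
+else
+  echo "All placeholders replaced"
+fi
+```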
+
+
+
+##### Values to change
+
+??? note "Replace with the correct _consumer did-key_ generated by the did-helper:"
+ - **attributes** > `issuerDid`:
+ ```json
+ ...
+ "attributes": {
+ "frontendUrl": "http://keycloak.consumer-raw.local",
+ "issuerDid": "did:key:zDnaeSwnupmLVfbuU6vwpiUBSwgjDuVon1f7u6i31f6v7VTfE"
+ }
+ ...
+ ```
+ - **components** > `org.keycloak.keys.KeyProvider` > `config` >`kid`:
+ ```json
+ ...
+ "components": {
+ "org.keycloak.keys.KeyProvider": [
+ {
+ ...
+ "config": {
+ ...
+ "kid": [
+ "did:key:zDnaeSwnupmLVfbuU6vwpiUBSwgjDuVon1f7u6i31f6v7VTfE"
+ ],
+ ...
+ }
+ }
+ ]
+ ...
+ }
+ ...
+ ```
+ - **components** > `org.keycloak.protocol.oid4vc.issuance.signing.VerifiableCredentialsSigningService` > `config` > `keyId` and `issuerDid`:
+ ```json
+ ...
+ "components": {
+ ...
+ "org.keycloak.protocol.oid4vc.issuance.signing.VerifiableCredentialsSigningService": [
+ {
+ ...
+ "config": {
+ "keyId": [
+ "did:key:zDnaeSwnupmLVfbuU6vwpiUBSwgjDuVon1f7u6i31f6v7VTfE"
+ ],
+ ...
+ "issuerDid": [
+ "did:key:zDnaeSwnupmLVfbuU6vwpiUBSwgjDuVon1f7u6i31f6v7VTfE"
+ ],
+ ...
+ }
+ }
+ ]
+ ...
+ }
+ ...
+ ```
+??? note "Replace with the correct _provider did-key_ generated by the did-helper:"
+ - **clients** > `clientId`:
+ ```json
+ ...
+ "clients": [
+ {
+ "clientId": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva",
+ ...
+ }
+ ]
+ ...
+ ```
+ - **clients** > `protocolMappers` > `config` > `clientId`:
+ ```json
+ ...
+ "clients": [
+ {
+ "protocolMappers": [
+ {
+ "config": {
+ ...
+ "clientId": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva",
+ ...
+ }
+ }
+ ...
+ ],
+ ...
+ }
+ ]
+ ...
+ ```
+ - **roles** > `client`:
+ ```json
+ ...
+ "roles": {
+ ...
+ "client": {
+ "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva": [
+ ...
+ ]
+ }
+ },
+ ...
+ ```
+ - **users** > `clientRoles`:
+ ```json
+ ...
+ "users": [
+ {
+ ...
+ "clientRoles": {
+ "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva": [
+ "OPERATOR"
+ ],
+ ...
+ },
+ ...
+ }
+ ],
+ ...
+ ```
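+
+Once all values are adjusted, the realm can be imported, either through the admin console (*Create realm* > *Browse*) or with the Keycloak Admin CLI inside the container. The following is a sketch; the pod name and namespace are taken from the earlier commands, while the admin credentials are assumptions to adapt to your deployment:
+
+```bash
+# Copy the prepared realm file into the Keycloak pod
+kubectl cp default_config_all-realm.json consumer-raw/keycloak-0:/tmp/default_config_all-realm.json -c keycloak
+# Authenticate the Admin CLI against the master realm
+kubectl exec keycloak-0 -c keycloak -n consumer-raw -- /opt/keycloak/bin/kcadm.sh config credentials \
+  --server http://localhost:8080 --realm master --user admin --password "${KEYCLOAK_ADMIN_PASSWORD}"
+# Create the realm from the prepared file
+kubectl exec keycloak-0 -c keycloak -n consumer-raw -- /opt/keycloak/bin/kcadm.sh create realms \
+  -f /tmp/default_config_all-realm.json
+```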
+
+### Clients
+
+To create a new client, go to the `Clients` section in the Keycloak administration console and click on the `Create` button.
+
+
+
+!!! warning
+
+ The DID of the client is the DID of the provider that you want to connect to the Data Space. You also need to set the roles for the client; these roles are used to authorize the client's access to the Data Space.
+
+### Users
+
+To create a new user, go to the `Users` section in the Keycloak administration console and click on the `Add User` button.
+
+
+
+??? note "Step by step:"
+ 1. Fill in the user details (username, email, etc.).
+ 
+ 2. Set a password for the user. You can set a temporary or permanent password.
+ 
+ 3. Assign roles to the user.
+ 
+ 1. In this case, we filter the roles and select the `OPERATOR` role.
+ 
+ 4. Save the user.
+
+#### Generate Verifiable Credential for new user
+
+##### User Credential
+
+Get an AccessToken from Keycloak:
+
+```bash
+export ACCESS_TOKEN=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/openid-connect/token \
+--header 'Accept: */*' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data grant_type=password \
+--data client_id=admin-cli \
+--data username=citcom-test \
+--data password=citcom-test \
+--data scope='openid'| jq '.access_token' -r); echo ${ACCESS_TOKEN}
+```
+
+Get a credential offer URI (for the `user-credential` configuration), using the retrieved AccessToken:
+
+```bash
+export OFFER_URI=$(curl -s -X GET 'http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/oid4vc/credential-offer-uri?credential_configuration_id=user-credential' --header "Authorization: Bearer ${ACCESS_TOKEN}" | jq '"\(.issuer)\(.nonce)"' -r); echo ${OFFER_URI}
+```
+
+Use the offer URI (i.e. the concatenated issuer and nonce fields) to retrieve the actual offer:
+
+```bash
+export PRE_AUTHORIZED_CODE=$(curl -s -X GET ${OFFER_URI} \
+--header "Authorization: Bearer ${ACCESS_TOKEN}" | jq '.grants."urn:ietf:params:oauth:grant-type:pre-authorized_code"."pre-authorized_code"' -r); echo ${PRE_AUTHORIZED_CODE}
+```
+
+Exchange the pre-authorized code from the offer with an AccessToken at the authorization server:
+
+```bash
+export CREDENTIAL_ACCESS_TOKEN=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/openid-connect/token \
+--header 'Accept: */*' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data grant_type=urn:ietf:params:oauth:grant-type:pre-authorized_code \
+--data pre-authorized_code=${PRE_AUTHORIZED_CODE} | jq '.access_token' -r); echo ${CREDENTIAL_ACCESS_TOKEN}
+```
+
+Use the returned access token to get the actual credential:
+
+```bash
+export VERIFIABLE_CREDENTIAL=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/oid4vc/credential \
+--header 'Accept: */*' \
+--header 'Content-Type: application/json' \
+--header "Authorization: Bearer ${CREDENTIAL_ACCESS_TOKEN}" \
+--data '{"credential_identifier":"user-credential", "format":"jwt_vc"}' | jq '.credential' -r); echo ${VERIFIABLE_CREDENTIAL}
+```
+
+You will receive a JWT-encoded credential to be used within the data space.
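+
+To inspect the issued credential locally, the JWT payload can be decoded. This is a sketch needing only `base64` and `jq`, and assumes `VERIFIABLE_CREDENTIAL` is still set from the previous step:
+
+```bash
+# Decode the base64url-encoded payload (second dot-separated segment) of a JWT
+decode_jwt_payload() {
+  local p
+  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
+  # Restore the padding that base64url encoding strips
+  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
+  printf '%s' "$p" | base64 -d
+}
+decode_jwt_payload "${VERIFIABLE_CREDENTIAL}" | jq .
+```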
+
+Example of the decoded JWT credential for the `citcom-test` user:
+```json
+{
+ "nbf": 1752477064,
+ "jti": "urn:uuid:cc974c65-b99c-4a92-9279-b01717e85a0b",
+ "iss": "did:key:zDnaeomDNv18fjUmK6pgfL5fpAkE1LyfHo1oJb6yG6Z7Sf9ft",
+ "vc": {
+ "type": [
+ "UserCredential"
+ ],
+ "issuer": "did:key:zDnaeomDNv18fjUmK6pgfL5fpAkE1LyfHo1oJb6yG6Z7Sf9ft",
+ "issuanceDate": 1752477064.948,
+ "credentialSubject": {
+ "firstName": "Citcom",
+ "lastName": "Test",
+ "email": "citcom-test@example.com"
+ },
+ "@context": [
+ "https://www.w3.org/2018/credentials/v1",
+ "https://www.w3.org/ns/credentials/v1"
+ ]
+ }
+}
+```
+
+##### Operator Credential
+
+Get an AccessToken from Keycloak:
+
+```bash
+export ACCESS_TOKEN=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/openid-connect/token \
+--header 'Accept: */*' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data grant_type=password \
+--data client_id=admin-cli \
+--data username=citcom-test \
+--data password=citcom-test \
+--data scope='openid'| jq '.access_token' -r); echo ${ACCESS_TOKEN}
+```
+
+Get a credential offer URI (for the `operator-credential` configuration), using the retrieved AccessToken:
+
+```bash
+export OFFER_URI=$(curl -s -X GET 'http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/oid4vc/credential-offer-uri?credential_configuration_id=operator-credential' --header "Authorization: Bearer ${ACCESS_TOKEN}" | jq '"\(.issuer)\(.nonce)"' -r); echo ${OFFER_URI}
+```
+
+Use the offer URI (i.e. the concatenated issuer and nonce fields) to retrieve the actual offer:
+
+```bash
+export PRE_AUTHORIZED_CODE=$(curl -s -X GET ${OFFER_URI} \
+--header "Authorization: Bearer ${ACCESS_TOKEN}" | jq '.grants."urn:ietf:params:oauth:grant-type:pre-authorized_code"."pre-authorized_code"' -r); echo ${PRE_AUTHORIZED_CODE}
+```
+
+Exchange the pre-authorized code from the offer with an AccessToken at the authorization server:
+
+```bash
+export CREDENTIAL_ACCESS_TOKEN=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/openid-connect/token \
+--header 'Accept: */*' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data grant_type=urn:ietf:params:oauth:grant-type:pre-authorized_code \
+--data pre-authorized_code=${PRE_AUTHORIZED_CODE} | jq '.access_token' -r); echo ${CREDENTIAL_ACCESS_TOKEN}
+```
+
+Use the returned access token to get the actual credential:
+
+```bash
+export VERIFIABLE_CREDENTIAL=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/oid4vc/credential \
+--header 'Accept: */*' \
+--header 'Content-Type: application/json' \
+--header "Authorization: Bearer ${CREDENTIAL_ACCESS_TOKEN}" \
+--data '{"credential_identifier":"operator-credential", "format":"jwt_vc"}' | jq '.credential' -r); echo ${VERIFIABLE_CREDENTIAL}
+```
+
+You will receive a JWT-encoded credential to be used within the data space.
+
+Example of the decoded JWT credential for the `citcom-test` user:
+
+```json
+{
+ "nbf": 1752560178,
+ "jti": "urn:uuid:adfa1e98-28d0-4a6f-8a53-d9c6f0294ca4",
+ "iss": "did:key:zDnaeomDNv18fjUmK6pgfL5fpAkE1LyfHo1oJb6yG6Z7Sf9ft",
+ "vc": {
+ "type": [
+ "OperatorCredential"
+ ],
+ "issuer": "did:key:zDnaeomDNv18fjUmK6pgfL5fpAkE1LyfHo1oJb6yG6Z7Sf9ft",
+ "issuanceDate": 1752560178.136,
+ "credentialSubject": {
+ "firstName": "Citcom",
+ "lastName": "Test",
+ "roles": [
+ {
+ "names": [
+ "OPERATOR"
+ ],
+ "target": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva"
+ }
+ ],
+ "email": "citcom-test@example.com"
+ },
+ "@context": [
+ "https://www.w3.org/2018/credentials/v1",
+ "https://www.w3.org/ns/credentials/v1"
+ ]
+ }
+}
+```
+
+##### Register new Provider as client
+
+Each new provider in the data space has to be instantiated in the verifiable credentials issuer (Keycloak) as a `client`.
+
+Currently, it is not possible to fully configure a client for the issuance of Verifiable Credentials via the Keycloak graphical interface. To achieve this, we will leverage the client import functionality and complete the necessary information.
+
+??? note "1. Instantiate the JSON file to be imported from `template.client_import.json`"
+
+ ```shell
+ cp template.client_import.json client_import.json
+ # PROVIDER_DID must hold the provider's did:key (placeholder value shown; use the real one)
+ export PROVIDER_DID="did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva"
+ sed -i "s/{% raw %}{{PROVIDER-DID}}{% endraw %}/${PROVIDER_DID}/g" client_import.json
+ ```
+
+??? note "2. Define the types of Verifiable Credentials to be issued."
+
+ ```json
+ "attributes": {
+ ...
+ "vc.verifiable-credential.format": "jwt_vc",
+ "vc.verifiable-credential.scope": "VerifiableCredential"
+ },
+ ```
+ In the code snippet above, a type of Verifiable Credential named *VerifiableCredential* is defined.
+
+ Additional credential types can be defined following the same structure:
+
+ ```json
+ "vc.operator-credential.format": "jwt_vc",
+ "vc.operator-credential.scope": "OperatorCredential"
+ ```
+
+ The above example defines a credential type named *OperatorCredential*.
+
+??? note "3. Adjust the information to be included in the VC JWT via mappers."
+
+ ```json
+ {
+ ...
+ "protocolMappers": [
+ {
+ "name": "first-name",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-user-attribute-mapper",
+ "consentRequired": false,
+ "config": {
+ "subjectProperty": "firstName",
+ "userAttribute": "firstName",
+ "aggregateAttributes": "false"
+ }
+ },
+ {
+ "name": "target-role-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-target-role-mapper",
+ "consentRequired": false,
+ "config": {
+ "subjectProperty": "roles",
+ "clientId": "{% raw %}{{PROVIDER-DID}}{% endraw %}",
+ "supportedCredentialTypes": "VerifiableCredential"
+ }
+ },
+ {
+ "name": "context-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-context-mapper",
+ "consentRequired": false,
+ "config": {
+ "context": "https://www.w3.org/2018/credentials/v1",
+ "supportedCredentialTypes": "VerifiableCredential"
+ }
+ }
+ ],
+ ...
+ }
+ ```
+
+ The mappers allow us to configure the information that will be added to the `credentialSubject` of the generated Verifiable Credential.
+
+ In the example provided, the following mappers have been created:
+
+ - `first-name`: adds the user's *firstName* field to the `credentialSubject`.
+ - `target-role-mapper`: adds the user's roles for the client whose ID is the provider DID (the `PROVIDER-DID` placeholder in the template).
+ - `context-mapper`: adds the static context with the specified value.
+
+ For each mapper, a list of `supportedCredentialTypes` is specified, indicating the credential types for which this mapper should be applied.
+
+ Example of the generated `credentialSubject`:
+
+ ```json
+ "credentialSubject": {
+ "firstName": "Citcom",
+ "lastName": "Test",
+ "roles": [
+ {
+ "names": [
+ "OPERATOR"
+ ],
+ "target": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva"
+ }
+ ],
+ "email": "citcom-test@example.com"
+ }
+ ```
+
+??? note "4. Import client into Keycloak."
+
+ Navigate to *Clients* > Click the *Import client* button:
+
+ 
+
+ Click *Browse* > Select the `client_import.json` file:
+
+ 
+
+??? note "5. Define *client_roles*."
+ 
+
+??? note "6. Assign *client_role* to a user."
+ 
+
+??? note "7. Generate a VerifiableCredential."
+
+ Get an AccessToken from Keycloak:
+
+ ```shell
+ export ACCESS_TOKEN=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/openid-connect/token \
+ --header 'Accept: */*' \
+ --header 'Content-Type: application/x-www-form-urlencoded' \
+ --data grant_type=password \
+ --data client_id=admin-cli \
+ --data username=test-user \
+ --data password=test \
+ --data scope='openid'| jq '.access_token' -r); echo ${ACCESS_TOKEN}
+ ```
+ Get a credential offer URI (for the `verifiable-credential` configuration), using the retrieved AccessToken:
+
+ ```shell
+ export OFFER_URI=$(curl -s -X GET 'http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/oid4vc/credential-offer-uri?credential_configuration_id=verifiable-credential' --header "Authorization: Bearer ${ACCESS_TOKEN}" | jq '"\(.issuer)\(.nonce)"' -r); echo ${OFFER_URI}
+ ```
+
+ Use the offer URI (i.e. the concatenated issuer and nonce fields) to retrieve the actual offer:
+
+ ```shell
+ export PRE_AUTHORIZED_CODE=$(curl -s -X GET ${OFFER_URI} \
+ --header "Authorization: Bearer ${ACCESS_TOKEN}" | jq '.grants."urn:ietf:params:oauth:grant-type:pre-authorized_code"."pre-authorized_code"' -r); echo ${PRE_AUTHORIZED_CODE}
+ ```
+
+ Exchange the pre-authorized code from the offer with an AccessToken at the authorization server:
+
+ ```shell
+ export CREDENTIAL_ACCESS_TOKEN=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/openid-connect/token \
+ --header 'Accept: */*' \
+ --header 'Content-Type: application/x-www-form-urlencoded' \
+ --data grant_type=urn:ietf:params:oauth:grant-type:pre-authorized_code \
+ --data pre-authorized_code=${PRE_AUTHORIZED_CODE} | jq '.access_token' -r); echo ${CREDENTIAL_ACCESS_TOKEN}
+ ```
+
+ Use the returned access token to get the actual credential:
+
+ ```shell
+ export VERIFIABLE_CREDENTIAL=$(curl -s -X POST http://keycloak.consumer-raw.local/realms/CitcomAI/protocol/oid4vc/credential \
+ --header 'Accept: */*' \
+ --header 'Content-Type: application/json' \
+ --header "Authorization: Bearer ${CREDENTIAL_ACCESS_TOKEN}" \
+ --data '{"credential_identifier":"verifiable-credential", "format":"jwt_vc"}' | jq '.credential' -r); echo ${VERIFIABLE_CREDENTIAL}
+ ```
+
+ You will receive a JWT-encoded credential to be used within the data space.
+
+ Example of a decoded JWT credential:
+ ```json
+ {
+ "nbf": 1752560178,
+ "jti": "urn:uuid:adfa1e98-28d0-4a6f-8a53-d9c6f0294ca4",
+ "iss": "did:key:zDnaeomDNv18fjUmK6pgfL5fpAkE1LyfHo1oJb6yG6Z7Sf9ft",
+ "vc": {
+ "type": [
+ "OperatorCredential"
+ ],
+ "issuer": "did:key:zDnaeomDNv18fjUmK6pgfL5fpAkE1LyfHo1oJb6yG6Z7Sf9ft",
+ "issuanceDate": 1752560178.136,
+ "credentialSubject": {
+ "firstName": "Citcom",
+ "lastName": "Test",
+ "roles": [
+ {
+ "names": [
+ "OPERATOR"
+ ],
+ "target": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva"
+ }
+ ],
+ "email": "citcom-test@example.com"
+ },
+ "@context": [
+ "https://www.w3.org/2018/credentials/v1",
+ "https://www.w3.org/ns/credentials/v1"
+ ]
+ }
+ }
+ ```
\ No newline at end of file
diff --git a/docs/documentation/verifiable_credentials/keycloak/template.default_config_all-realm.json b/docs/documentation/verifiable_credentials/keycloak/template.default_config_all-realm.json
new file mode 100644
index 00000000..7de7cfa9
--- /dev/null
+++ b/docs/documentation/verifiable_credentials/keycloak/template.default_config_all-realm.json
@@ -0,0 +1,267 @@
+{
+ "id": "CitcomAI",
+ "realm": "CitcomAI",
+ "displayName": "Keycloak",
+ "displayNameHtml": "
Keycloak
",
+ "enabled": true,
+ "attributes": {
+ "frontendUrl": "http://keycloak.consumer-raw.local",
+ "issuerDid": "{{ISSUER-DID}}"
+ },
+ "sslRequired": "none",
+ "clients": [
+ {
+ "clientId": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva",
+ "enabled": true,
+ "description": "Client to connect test.org",
+ "surrogateAuthRequired": false,
+ "alwaysDisplayInConsole": false,
+ "clientAuthenticatorType": "client-secret",
+ "defaultRoles": [],
+ "redirectUris": [],
+ "webOrigins": [],
+ "notBefore": 0,
+ "bearerOnly": false,
+ "consentRequired": false,
+ "standardFlowEnabled": true,
+ "implicitFlowEnabled": false,
+ "directAccessGrantsEnabled": false,
+ "serviceAccountsEnabled": false,
+ "publicClient": false,
+ "frontchannelLogout": false,
+ "protocol": "oid4vc",
+ "attributes": {
+ "client.secret.creation.time": "1675260539",
+ "vc.user-credential.format": "jwt_vc",
+ "vc.user-credential.scope": "UserCredential",
+ "vc.verifiable-credential.format": "jwt_vc",
+ "vc.verifiable-credential.scope": "VerifiableCredential",
+ "vc.operator-credential.format": "jwt_vc",
+ "vc.operator-credential.scope": "OperatorCredential"
+ },
+ "protocolMappers": [
+ {
+ "name": "target-role-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-target-role-mapper",
+ "config": {
+ "subjectProperty": "roles",
+ "clientId": "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva",
+ "supportedCredentialTypes": "OperatorCredential"
+ }
+ },
+ {
+ "name": "context-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-context-mapper",
+ "config": {
+ "context": "https://www.w3.org/2018/credentials/v1",
+ "supportedCredentialTypes": "VerifiableCredential,UserCredential,OperatorCredential"
+ }
+ },
+ {
+ "name": "email-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-user-attribute-mapper",
+ "config": {
+ "subjectProperty": "email",
+ "userAttribute": "email",
+ "supportedCredentialTypes": "UserCredential,OperatorCredential"
+ }
+ },
+ {
+ "name": "firstName-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-user-attribute-mapper",
+ "config": {
+ "subjectProperty": "firstName",
+ "userAttribute": "firstName",
+ "supportedCredentialTypes": "UserCredential,OperatorCredential"
+ }
+ },
+ {
+ "name": "lastName-mapper",
+ "protocol": "oid4vc",
+ "protocolMapper": "oid4vc-user-attribute-mapper",
+ "config": {
+ "subjectProperty": "lastName",
+ "userAttribute": "lastName",
+ "supportedCredentialTypes": "UserCredential,OperatorCredential"
+ }
+ }
+ ],
+ "authenticationFlowBindingOverrides": {},
+ "fullScopeAllowed": true,
+ "nodeReRegistrationTimeout": -1,
+ "defaultClientScopes": [],
+ "optionalClientScopes": []
+ }
+ ],
+ "clientScopes": [
+ {
+ "name": "roles",
+ "description": "OpenID Connect scope for add user roles to the access token",
+ "protocol": "openid-connect",
+ "attributes": {
+ "include.in.token.scope": "false",
+ "display.on.consent.screen": "true",
+ "consent.screen.text": "${rolesScopeConsentText}"
+ },
+ "protocolMappers": [
+ {
+ "name": "audience resolve",
+ "protocol": "openid-connect",
+ "protocolMapper": "oidc-audience-resolve-mapper",
+ "consentRequired": false,
+ "config": {
+ "introspection.token.claim": "true",
+ "access.token.claim": "true"
+ }
+ },
+ {
+ "name": "realm roles",
+ "protocol": "openid-connect",
+ "protocolMapper": "oidc-usermodel-realm-role-mapper",
+ "consentRequired": false,
+ "config": {
+ "introspection.token.claim": "true",
+ "multivalued": "true",
+ "user.attribute": "foo",
+ "access.token.claim": "true",
+ "claim.name": "realm_access.roles",
+ "jsonType.label": "String"
+ }
+ },
+ {
+ "name": "client roles",
+ "protocol": "openid-connect",
+ "protocolMapper": "oidc-usermodel-client-role-mapper",
+ "consentRequired": false,
+ "config": {
+ "introspection.token.claim": "true",
+ "multivalued": "true",
+ "user.attribute": "foo",
+ "access.token.claim": "true",
+ "claim.name": "resource_access.${client_id}.roles",
+ "jsonType.label": "String"
+ }
+ }
+ ]
+ }
+ ],
+ "defaultDefaultClientScopes": [],
+ "defaultOptionalClientScopes": [],
+ "roles": {
+ "realm": [
+ {
+ "name": "user",
+ "description": "User privileges",
+ "composite": false,
+ "clientRole": false,
+ "containerId": "citcomai-realm",
+ "attributes": {}
+ }
+ ],
+ "client": {
+ "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva": [
+ {
+ "name": "READER",
+ "description": "Is allowed to see offers etc.",
+ "clientRole": true
+ },
+ {
+ "name": "OPERATOR",
+ "description": "Is allowed to operate clusters.",
+ "clientRole": true
+ }
+ ]
+ }
+ },
+ "users": [
+ {
+ "username": "test-user",
+ "enabled": true,
+ "email": "test@user.org",
+ "firstName": "Test",
+ "lastName": "Reader",
+ "credentials": [
+ {
+ "type": "password",
+ "value": "test"
+ }
+ ],
+ "clientRoles": {
+ "did:key:zDnaeud7zQitUYp1MxZyRGSuweXfDXGdu6hbYpKvrM6Ws1Eva": [
+ "OPERATOR"
+ ],
+ "account": [
+ "view-profile",
+ "manage-account"
+ ]
+ },
+ "groups": []
+ }
+ ],
+ "groups": [],
+ "components": {
+ "org.keycloak.keys.KeyProvider": [
+ {
+ "id": "a4589e8f-7f82-4345-b2ea-ccc9d4366600",
+ "name": "citcomai-key",
+ "providerId": "java-keystore",
+ "subComponents": {},
+ "config": {
+ "keystore": [
+ "/did-material/cert.pfx"
+ ],
+ "keystorePassword": [
+ "{{STORE-PASS}}"
+ ],
+ "keyAlias": [
+ "didPrivateKey"
+ ],
+ "keyPassword": [
+ "{{STORE-PASS}}"
+ ],
+ "kid": [
+ "{{ISSUER-DID}}"
+ ],
+ "active": [
+ "true"
+ ],
+ "priority": [
+ "0"
+ ],
+ "enabled": [
+ "true"
+ ],
+ "algorithm": [
+ "ES256"
+ ]
+ }
+ }
+ ],
+ "org.keycloak.protocol.oid4vc.issuance.signing.VerifiableCredentialsSigningService": [
+ {
+ "id": "jwt-signing",
+ "name": "jwt-signing-service",
+ "providerId": "jwt_vc",
+ "subComponents": {},
+ "config": {
+ "keyId": [
+ "{{ISSUER-DID}}"
+ ],
+ "algorithmType": [
+ "ES256"
+ ],
+ "issuerDid": [
+ "{{ISSUER-DID}}"
+ ],
+ "tokenType": [
+ "JWT"
+ ]
+ }
+ }
+ ]
+ }
+}
\ No newline at end of file
diff --git a/docs/faq/index.md b/docs/faq/index.md
index 0be4a150..7fc6400d 100644
--- a/docs/faq/index.md
+++ b/docs/faq/index.md
@@ -15,7 +15,7 @@ title: Frequently Asked Questions (FAQ)
??? question "What types of elements can I add to Markdown?"
- This website uses pure Markdown as well as the plugin [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/). You can find all the information about the available references [here](https://squidfunk.github.io/mkdocs-material/reference/)."
+ This website uses pure Markdown as well as the plugin [Material for MkDocs](https://squidfunk.github.io/mkdocs-material/). You can find all the information about the available references [here](https://squidfunk.github.io/mkdocs-material/reference/).
??? question "What icons can I use?"
diff --git a/docs/getting_started/data_spaces/deployment.md b/docs/getting_started/data_spaces/deployment.md
deleted file mode 100644
index 490f429c..00000000
--- a/docs/getting_started/data_spaces/deployment.md
+++ /dev/null
@@ -1,30 +0,0 @@
-# Data Space Deployment
-
-As we have seen in the [Data Space - Join one](./join.md) section, it is mandatory to deploy a connector in each organization that wants to share data in a Data Space. This connector is responsible for managing the data sharing process with the rest of the Data Space members.
-
-These are the instructions to deploy a Data Space from scratch with Fiware technologies.
-
-## Minimal Viable Data Space
-
-The minimal viable data space is composed of a Trust Anchor and a Data Space Connector.
-
-### FIWARE Data Space Connector
-
-FIWARE developed a Data Space Connector that can be deployed in a [local](https://github.com/FIWARE/data-space-connector/blob/main/doc/deployment-integration/local-deployment/LOCAL.MD) environment. This connector is a minimal version of the Data Space Connector that can be used to test the Data Space functionalities.
-
-### From scratch
-
-This section describes how to deploy a Data Space from scratch in different scenarios:
-
-- Cloud provider.
-- On-premises infrastructure.
-
-!!! warning
-
- This guide is a work in progress. It will be updated with more detailed in the next months. [Terraform deployment](https://github.com/CitComAI-Hub/Minimum_Viable_DataSpace_Infrastructure/tree/main/examples/kind_minimal_ds_local).
-
-## Data Federation
-
-The Data Federation is a more complex scenario where multiple Data Spaces or data platform are federated to share data. Depending on the technology used, the federation process can be different.
-
-[Reference](../../documentation/data_federation/index.md)
\ No newline at end of file
diff --git a/docs/getting_started/data_spaces/img/citcom_highLevel_DSarch.png b/docs/getting_started/data_spaces/img/citcom_highLevel_DSarch.png
new file mode 100644
index 00000000..aa0b9c12
Binary files /dev/null and b/docs/getting_started/data_spaces/img/citcom_highLevel_DSarch.png differ
diff --git a/docs/getting_started/data_spaces/img/mvds_arch.svg b/docs/getting_started/data_spaces/img/mvds_arch.svg
new file mode 100644
index 00000000..d35c0823
--- /dev/null
+++ b/docs/getting_started/data_spaces/img/mvds_arch.svg
@@ -0,0 +1,3 @@
+
+
+
\ No newline at end of file
diff --git a/docs/getting_started/data_spaces/index.md b/docs/getting_started/data_spaces/index.md
index 3a9c886d..30ece9b6 100644
--- a/docs/getting_started/data_spaces/index.md
+++ b/docs/getting_started/data_spaces/index.md
@@ -1,18 +1,53 @@
-# Data spaces
-Data spaces refer to structured and managed environments where data from various sources is securely stored, shared, and utilized for AI and robotics applications within smart and sustainable cities. These **data spaces are the project's core technology**, enabling participants to **access** and leverage **high-quality data for testing, experimentation, and validation of AI technologies**.
+---
+title: Data Spaces
+---
+
+## Overview
+
+Data spaces (DS) refer to structured and managed environments where data from various sources is securely stored, shared, and utilized for AI and robotics applications within smart and sustainable cities. These **data spaces are the project's core technology**, enabling participants to **access** and leverage **high-quality data for testing, experimentation, and validation of AI technologies**.
Data spaces support **interoperability**, ensuring that data from different sources can be combined and used while **complying with regulations** such as the GDPR and other EU directives. They provide the necessary infrastructure for managing data in a way that supports ethical considerations, cybersecurity, and the broader goals of creating a more digital and sustainable urban environment.
-In CitCom.ai, data spaces are pivotal in accelerating innovation by facilitating collaboration among different stakeholders. They offer a **secure** and compliant framework for data exchange, ensuring that the **AI solutions developed within the project are both reliable and aligned with European standards**.
+
+
+??? info "More information"
+ - [**Data Space Support Center (DSSC):**](https://dssc.eu/)
+ - [Data Space Definition](https://dssc.eu/space/BVE2/1071251613/Introduction+-+Key+Concepts+of+Data+Spaces#1.-What-is-a-data-space?)
+ - [**Data Spaces for Smart Cities (DS4SCC):**](https://www.ds4sscc.eu/)
+ - [Interactive portal for building data spaces in Smart Communities](https://inventory.ds4sscc.eu/)
-Basically, a data space must be formed, at least, by the following components:
+## Minimum Viable Data Space
+
+A Minimum Viable Data Space (MVDS) is a basic configuration of a data space that includes only the essential components required (*Trust Framework* and *Connector*) to ensure interoperability and enable the secure and sovereign exchange of information between organisations. Its minimal approach aims to reduce initial complexity, support technological adoption, and provide a way to test the ecosystem’s functionality before scaling to more comprehensive solutions.
- **Trust Anchor (TA)**: Responsible for managing trust in the data space. It is the manager of the identities of the different elements of the data space and of managing the trust in them. At least one TA shall exist in the data space, managed by the organization in charge of the data space.
+ !!! Tip "More details"
+ Overview of open-source trust frameworks: [here](../../documentation/trust_frameworks/index.md)
+
- **Data Space Connector (DSC)**: Responsible for managing the communication between the different elements of the data space. It oversees managing authentication, authorization and data access control. There must be at least two DSCs, one per organization, to be able to affirm that a data space exists.
-
+ !!! Tip "More details"
+ Overview of open-source data spaces connectors: [here](../../documentation/data_space_connectors/index.md)
+
+This type of data space serves as a testing environment that facilitates the validation of data exchange models and a gradual migration from existing systems. Thanks to its streamlined structure, the MVDS is especially well-suited for demonstrations, pilots, or early implementation stages in collaborative settings where data sharing is expected to be trustworthy and controlled.
+
+!!! Note "Interoperability Levels"
+
+ Following the [Interoperability Levels](../interoperability.md#interoperability-levels), the MVDS aims to provide the **minimal set of tools required to progress from** interoperability **level 1 to level 2**.
+
+## CitCom.ai Data Space
+
+Data spaces are pivotal in accelerating innovation by facilitating collaboration among different stakeholders. They offer a secure and compliant framework for data exchange, ensuring that the **AI solutions developed within the project are both reliable and aligned with European standards**.
+
+The following figure provides a high-level overview of a general architecture illustrating how CitCom.ai integrates with current data platforms, and how data spaces will serve as a linking nexus between nodes, data platforms, and the AI and robotics solutions.
+
+
+
+At the bottom of the figure, we see the data platforms that exist in many cities and communities throughout the EU ([Nodes & Countries](../nodes_countries.md)). These data platforms are operated by both public and private sector organisations. Many of these data sets are locked in silos, meaning that it is hard to get the data out of the databases in which they are stored and share it with other parties.
+
+Data spaces are currently being created to counter this siloed EU data landscape. Multiple data space initiatives at local, national and EU level are currently underway. These data spaces will be contributed to by the CitCom.ai TEF sites and leveraged by the AI and robotics solutions that will be brought to the TEF sites by CitCom.ai customers. The solutions will connect to the data spaces through the principles of the Minimal Interoperability Mechanisms (MIMs).
-!!! Tip "More details"
+In addition to leveraging existing and emerging data space initiatives, **CitCom.ai will also deploy its own experimental data space environment**. This tailored infrastructure will interconnect as many project sites as possible, enabling the seamless exchange of data across diverse platforms. By simulating real-world interoperability scenarios, it will serve as a living lab where AI and robotics solutions can be tested in a multi-site setup, ensuring that interoperability and compliance with Minimal Interoperability Mechanisms (MIMs) are validated and enhanced in practice.
- Overview of open-source data spaces connectors: [Overview section](../../documentation/data_space_connectors/index.md)
\ No newline at end of file
+
\ No newline at end of file
diff --git a/docs/getting_started/data_spaces/join.md b/docs/getting_started/data_spaces/join.md
deleted file mode 100644
index ec1bb6f6..00000000
--- a/docs/getting_started/data_spaces/join.md
+++ /dev/null
@@ -1,34 +0,0 @@
-# Join a Data Space
-
-!!! Info
-
- CitCom.ai uses data spaces based on [FIWARE technology](https://github.com/FIWARE/data-space-connector/tree/main).
-
-The initial adoption of the FIWARE Data Space connector (DSC) within the CitCom.ai project is a strategic decision that aligns with the [Data Space Business Alliance](https://data-spaces-business-alliance.eu/) (DSBA) recommendations, ensuring a robust and interoperable framework for data exchange across Testing and Experimentation Facilities (TEFs). The FIWARE DSC is recognized for its compliance with open standards and ability to facilitate secure, efficient data sharing between diverse platforms and ecosystems.
-
-To access to a data space, you mainly need:
-
-1. **A digital certificate**: To be able to identify yourself as an organization within the data space.
-
-2. **A data space connector**: To be able to communicate with the data space.
-Data Space Deployment.
-
-## Sign Up
-
-!!! warning "Current sign up process"
-
- In general, it will be necessary to **contact via email the managers of the data space** so that they authorize our organization.
-
-Depending on the configuration of the data space, the registration process may vary.
-
-Currently, most commonly, you will need to contact the data space TA administrator for information on the type of certificate you need and how to provide it so that they can authorize you as an authorized entity in the data space.
-
-In the future, this process will be automated, and you will be able to do it directly from the data space platform. Using the European digital identity, you will be able to register in the data space in a simple and secure way.
-
-## Data Space Connector
-
-The Data Space Connector (DSC) is a software component that is responsible for managing the communication between the different elements of the data space. It oversees managing authentication, authorization, and data access control.
-
-Fiware provides a reference implementation of the DSC, which is available in the [Fiware GitHub repository](https://github.com/FIWARE/data-space-connector/tree/main)
-
-See the deployment [instructions](deployment.md) for more details.
diff --git a/docs/getting_started/get_involved.md b/docs/getting_started/get_involved.md
deleted file mode 100644
index db7da89a..00000000
--- a/docs/getting_started/get_involved.md
+++ /dev/null
@@ -1,37 +0,0 @@
-# Get involved
-
-CitCom.ai is a project **open to the European society, allowing any interested company or institution to join this ecosystem**. CitCom.ai's **core technology is data spaces**, which guarantee secure environments for data exchange.
-
-## What data are available?
-There are multiple data spaces grouping different institutions (e.g. cities, companies, etc.) at local, national or European Union level. Depending on each data space, the available data may vary.
-
-
-
-- :material-store-search-outline:{ .lg .middle } __Data catalog__
-
- ---
-
- Explore available data within currently deployed data spaces.
-
- [:octicons-arrow-right-24: Learn more](../data_catalog/index.md)
-
-
-
-## How can I join a data space?
-Basically, you **need to deploy a data space connector** and **request certified access** to the data space trust anchor manager.
-
-The process of joining an existing data space will be outlined, including details on membership requirements, access controls, and collaboration tools within that environment.
-
-In addition, access to a data space does not have to be only as a **consumer** of data, but you can also access it as a **provider**. Depending on the **role** you want to play in the data space, the way you join may vary.
-
-
-
-- :material-database-arrow-right-outline:{ .lg .middle } __Join a Data Space__
-
- ---
-
- Understand the process of joining to an existing data space.
-
- [:octicons-arrow-right-24: Learn more](data_spaces/join.md)
-
-
\ No newline at end of file
diff --git a/docs/getting_started/get_involved/index.md b/docs/getting_started/get_involved/index.md
new file mode 100644
index 00000000..ac3026e9
--- /dev/null
+++ b/docs/getting_started/get_involved/index.md
@@ -0,0 +1,42 @@
+---
+title: Get involved
+---
+
+CitCom.ai is a project **open to European society, allowing any interested company or institution to join this ecosystem**. CitCom.ai's **core technology is data spaces**, which guarantee secure environments for data exchange.
+
+## What data are available?
+
+CitCom.ai brings together a network of sites distributed across Europe ([Nodes & Countries](../nodes_countries.md)), each contributing data from distinct smart city domains, including mobility ("move"), energy ("power") and connectivity ("connect"). These datasets originate from a variety of stakeholders such as cities, research centers, and private entities.
+
+To explore what’s currently available, we invite you to browse our data catalog, where you’ll find detailed descriptions and thematic classifications. If a dataset sparks your interest, you can reach out to its owner directly to request access. In many cases, data access can also be facilitated via our experimental data space layer, ensuring secure and standardized retrieval in compliance with European data-sharing protocols.
+
+
+
+- :material-store-search-outline:{ .lg .middle } __Data catalog__
+
+ ---
+
+ Explore available data within currently deployed data spaces.
+
+ [:octicons-arrow-right-24: Learn more](../../data_catalog/index.md)
+
+
+
+## How to join the CitCom.ai data space?
+
+This section outlines the process of joining an existing data space, including details on membership requirements, access controls, and collaboration tools within that environment.
+
+In addition, you do not have to join a data space only as a **consumer** of data; you can also join as a **provider**. Depending on the **role** you want to play in the data space, the way you join may vary. In essence, you **need to deploy a data space connector** and **request certified access** from the data space trust anchor manager.
+
+
+
+- :material-database-arrow-right-outline:{ .lg .middle } __Join a Data Space__
+
+ ---
+
+    Understand the process of joining an existing data space.
+
+
+ [:octicons-arrow-right-24: Learn more](./join.md)
+
+
\ No newline at end of file
diff --git a/docs/getting_started/get_involved/join.md b/docs/getting_started/get_involved/join.md
new file mode 100644
index 00000000..1d13913a
--- /dev/null
+++ b/docs/getting_started/get_involved/join.md
@@ -0,0 +1,102 @@
+---
+title: Join the Data Space
+---
+
+!!! Warning
+    **CitCom.ai uses [FIWARE technology](https://github.com/FIWARE/data-space-connector/tree/main) for its data spaces**, although in the future it will evolve into a combination of FIWARE and [Eclipse technology](https://github.com/eclipse-edc/).
+
+The initial adoption of the FIWARE Data Space technology within the CitCom.ai project is a strategic decision that aligns with the [Data Space Business Alliance](https://data-spaces-business-alliance.eu/) (DSBA) and the [Data Spaces for Smart Cities (DS4SCC)](https://www.ds4sscc.eu/) recommendations, ensuring a robust and interoperable framework for data exchange across Testing and Experimentation Facilities (TEFs).
+
+To access a data space, you mainly need:
+
+1. **A digital certificate (Verifiable Credential)**: To identify yourself as an organization within the data space.
+
+2. **A data space connector**: To communicate with the data space.
+
+## VC Issuer
+
+The VC Issuer is a component that issues Verifiable Credentials (VCs) to entities within the data space. These credentials are used to authenticate and authorize access to resources in the data space.
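+
+As an illustration, a credential issued to an organization could take the shape of the following minimal sketch based on the [W3C VC Data Model](https://www.w3.org/TR/vc-data-model/); the identifiers, credential type and claims below are placeholders, and the actual credential schema is defined by the data space operator:
+
+```json
+{
+  "@context": ["https://www.w3.org/2018/credentials/v1"],
+  "type": ["VerifiableCredential"],
+  "issuer": "did:web:issuer.example.org",
+  "issuanceDate": "2024-05-01T00:00:00Z",
+  "credentialSubject": {
+    "id": "did:web:my-organization.example.org",
+    "legalName": "Example Organization"
+  }
+}
+```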
+
+
+
+## Trust Anchor
+
+The Trust Anchor (TA) is a critical component in the data space ecosystem. It serves as a trusted entity that issues and manages digital certificates (Verifiable Credentials) for organizations and individuals participating in the data space. The TA ensures that all participants are authenticated and authorized to access the resources within the data space.
+
+!!! Tip "More details"
+ Overview of open-source trust frameworks: [here](../../documentation/trust_frameworks/index.md)
+
+### Sign Up
+
+Depending on the configuration of the data space, the registration process may vary. Currently, you will most commonly need to **contact the data space TA administrator** to find out which type of certificate you need and how to provide it, so that they can register your organization as an authorized entity in the data space.
+
+In the future, this process will be automated, and you will be able to do it directly from the data space platform. Using the European digital identity, you will be able to register in the data space in a simple and secure way.
+
+??? warning "TA Endpoint"
+
+ To be part of the CitCom.ai data space, you need to register in the CitCom.ai Trust Anchor. The endpoint for the CitCom.ai Trust Anchor is: `https://xxxx`
+
+## Data Space Connector
+
+The Data Space Connector (DSC) is a software component responsible for managing the communication between the different elements of the data space. It handles authentication, authorization, and data access control. FIWARE provides a reference implementation of the DSC, which is available in the [FIWARE GitHub repository](https://github.com/FIWARE/data-space-connector/tree/main).
+
+**In all cases, you will need to deploy a Data Space Connector** in your organization to be able to share data in the data space. Depending on whether you want to consume or provide data, you will need to deploy a different type of connector.
+
+!!! Tip "More details"
+ Overview of open-source data spaces connectors: [here](../../documentation/data_space_connectors/index.md)
+
+
+
+- :material-cog-outline:{ .lg .middle } __Consumer__
+
+ ---
+
+ The Consumer Role is responsible for consuming data from the data space. This role requires a Data Space Connector that is configured to access and retrieve data from the data space.
+
+ [:octicons-arrow-right-24: _AWS Deployment_](../../documentation/mv_data_space/fiware/consumer.md)
+
+ [:octicons-arrow-right-24: _Technical Details_](../../documentation/data_space_connectors/fiware/index.md#consumer)
+
+- :material-cog-outline:{ .lg .middle } __Provider__
+
+ ---
+
+ The Provider Role is responsible for providing data to the data space. This role requires a Data Space Connector that is configured to share data with the data space.
+
+ [:octicons-arrow-right-24: _AWS Deployment_](../../documentation/mv_data_space/fiware/provider.md)
+
+ [:octicons-arrow-right-24: _Technical Details_](../../documentation/data_space_connectors/fiware/index.md#provider)
+
+
+
+## Data Federation
+
+The Data Federation is a more complex scenario where multiple data spaces or data platforms are federated to share data. Depending on the technology used, the federation process may differ. [Reference](../../documentation/data_federation/index.md)
+
+## Verifiable Credentials management
+
+Management of Verifiable Credentials (VCs) is a crucial aspect of the data space, as it ensures that all participants are authenticated and authorized to access resources. VC management is typically handled by an identity management system, such as Keycloak.
+
+
+
+- :material-cog-outline:{ .lg .middle } __Keycloak Configuration__
+
+ ---
+
+    Keycloak manages the authentication and authorization of users and services in the data space. Its configuration is crucial for issuing Verifiable Credentials.
+
+ [:octicons-arrow-right-24: _About Keycloak_](../../documentation/verifiable_credentials/index.md#identity-management-keycloak)
+
+ [:octicons-arrow-right-24: _Hands-On Configuration_](../../documentation/verifiable_credentials/keycloak/index.md)
+
+
\ No newline at end of file
diff --git a/docs/getting_started/interoperability.md b/docs/getting_started/interoperability.md
index 3139bfb0..7b06c0fc 100644
--- a/docs/getting_started/interoperability.md
+++ b/docs/getting_started/interoperability.md
@@ -3,25 +3,22 @@ Interoperability is a key aspect of CitCom.ai’s approach to ensuring that data
## IDSA Documentation
-- **Key Layers:**
- The IDS documentation emphasizes four primary layers—**technical**, **semantic**, **organizational**, and **legal**—that collectively underpin effective interoperability.
+- **Key Layers:** The IDS documentation emphasizes four primary layers—**technical**, **semantic**, **organizational**, and **legal**—that collectively underpin effective interoperability.
-- **Intra Data Space Interoperability:**
- Within a single Data Space, a unified governance framework ensures that all participants adhere to the same protocols and models.
+- **Intra Data Space Interoperability:** Within a single Data Space, a unified governance framework ensures that all participants adhere to the same protocols and models.
-- **Cross-Data Space Interoperability:**
- When operating across multiple Data Spaces, additional coordination is required to bridge varying protocols and legal frameworks.
+- **Cross-Data Space Interoperability:** When operating across multiple Data Spaces, additional coordination is required to bridge varying protocols and legal frameworks.
-For a comprehensive explanation and additional context, please refer to the original [IDS documentation on Interoperability in Data Spaces](https://docs.internationaldataspaces.org/ids-knowledgebase/idsa-rulebook/idsa-rulebook/3_interoperability).
+For a comprehensive explanation and additional context, please refer to the original [IDS documentation on Interoperability in Data Spaces](https://docs.internationaldataspaces.org/ids-knowledgebase/idsa-rulebook/idsa-rulebook/3.-functional_requirements/3.11-interoperability_in_data_spaces).
## MIMs
-[MIMs](https://mims.oascities.org/) stands for "Minimal Interoperability Mechanisms". These guidelines and standards were developed by the [Open & Agile Smart Cities (OASC)](https://oascities.org/) initiative to promote **interoperability** among different city systems and technologies, such as traffic management systems, waste management systems, and energy distribution systems. CitCom.ai project **embraces minimal interoperability mechanisms (MIMs)** as part of its approach.
+[MIMs](https://mims.oascities.org/NzWXOO1Fttw4wtqv1Wys/) stands for "Minimal Interoperability Mechanisms". These guidelines and standards were developed by the [Open & Agile Smart Cities (OASC)](https://oascities.org/) initiative to promote **interoperability** among different city systems and technologies, such as traffic management systems, waste management systems, and energy distribution systems. The CitCom.ai project **embraces minimal interoperability mechanisms (MIMs)** as part of its approach.
## Interoperability levels
Interoperability in data spaces defines how diverse systems can seamlessly exchange, interpret, and use data. Interoperability can be conceptualized as a maturity model with three levels:
### Level 0 - Custom Integration
-At Level 0, no standard exists for data exchange. Each system is integrated via wholly customized solutions. This results in interfaces that are highly specific to each data platform. Although functional, such integration is often brittle and difficult to scale because it lacks a common vocabulary or consistent protocols. The absence of shared standards limits the potential for cross-organizational data reuse.
+At Level 0, **no standard exists for data exchange**. Each system is integrated via wholly customized solutions. This results in interfaces that are highly specific to each data platform. Although functional, such integration is often brittle and difficult to scale because it lacks a common vocabulary or consistent protocols. The absence of shared standards limits the potential for cross-organizational data reuse.
### Level 1 - Pivotal Interoperability Points
At Level 1, the focus shifts to identifying and adopting pivotal interoperability points among different data platforms. Key mechanisms such as **MIM1 NGSI-LD** and **MIM2 Smart Data Models** serve as the foundational standards at this stage.
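
For example, a minimal NGSI-LD entity using the Smart Data Models `AirQualityObserved` model might look as follows; the entity id, measurement values and coordinates are illustrative only:

```json
{
  "id": "urn:ngsi-ld:AirQualityObserved:example-001",
  "type": "AirQualityObserved",
  "dateObserved": {
    "type": "Property",
    "value": {"@type": "DateTime", "@value": "2024-05-01T12:00:00Z"}
  },
  "temperature": {"type": "Property", "value": 21.5, "unitCode": "CEL"},
  "no2": {"type": "Property", "value": 42.0, "unitCode": "GQ"},
  "location": {
    "type": "GeoProperty",
    "value": {"type": "Point", "coordinates": [9.19, 45.46]}
  },
  "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"]
}
```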
@@ -45,4 +42,4 @@ This unified interface simplifies plug-and-play integration and ensures that all
!!! Tip "Learn more about this"
- Check [Interoperability in Data Spaces](https://docs.internationaldataspaces.org/ids-knowledgebase/idsa-rulebook/idsa-rulebook/3_interoperability) section from IDSA. Also check [MIMs Toolkit](./../toolbox/mims_toolkit.md) section or [OASC MIMs 2024](https://mims.oascities.org/) for more details.
+ Check [Interoperability in Data Spaces](https://docs.internationaldataspaces.org/ids-knowledgebase/idsa-rulebook/idsa-rulebook/3.-functional_requirements/3.11-interoperability_in_data_spaces) section from IDSA. Also check [MIMs Toolkit](./../toolbox/mims_toolkit.md) section or [OASC MIMs 2024](https://mims.oascities.org/NzWXOO1Fttw4wtqv1Wys/) for more details.
diff --git a/docs/getting_started/roadmap.yaml b/docs/getting_started/roadmap.yaml
index 6dad2607..b41800d1 100644
--- a/docs/getting_started/roadmap.yaml
+++ b/docs/getting_started/roadmap.yaml
@@ -4,7 +4,7 @@
icon: ":material-hub-outline:"
- title: "[Data Spaces](data_spaces/index.md)"
icon: ":material-database-outline:"
-- title: "[Get involved](./get_involved.md)"
+- title: "[Get involved](./get_involved/index.md)"
icon: ":material-handshake:"
- title: "[AI Services](../services/index.md)"
icon: ":material-robot-outline:"
diff --git a/docs/index.md b/docs/index.md
index 09995351..9dd83b7a 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -46,36 +46,35 @@ body > div.md-container > main > div > div > article > a{
[:octicons-arrow-right-24: Learn more](./tef/index.md)
-- :material-toolbox-outline:{ .lg .middle } __Toolbox__
+- :material-file-document-outline:{ .lg .middle } __Documentation__
---
- A set of useful tools compatible with MIMs.
+ Reports and deployment guides of different components.
- [:octicons-arrow-right-24: Learn more](./toolbox/index.md)
+ [:octicons-arrow-right-24: Learn more](./documentation/index.md)
-- :material-robot-outline:{ .lg .middle } __AI Services__
+- :material-store-search-outline:{ .lg .middle } __Data Catalog__
---
- Minimal interoperable AI services for platforms compatible with MIMs.
+ Centralized hub to keep track of available datasets.
- [:octicons-arrow-right-24: Learn more](./services/index.md)
+ [:octicons-arrow-right-24: Learn more](./data_catalog/index.md)
-- :material-file-document-outline:{ .lg .middle } __Documentation__
+- :material-toolbox-outline:{ .lg .middle } __Toolbox__
---
- Reports and deployment guides of different components.
+ A set of useful tools compatible with MIMs.
- [:octicons-arrow-right-24: Learn more](./documentation/index.md)
+ [:octicons-arrow-right-24: Learn more](./toolbox/index.md)
-- :material-store-search-outline:{ .lg .middle } __Data Catalog__
+- :material-robot-outline:{ .lg .middle } __AI Services__
---
- Centralized hub to keep track of available datasets.
-
- [:octicons-arrow-right-24: Learn more](./data_catalog/index.md)
+ Minimal interoperable AI services for platforms compatible with MIMs.
+ [:octicons-arrow-right-24: Learn more](./services/index.md)
diff --git a/docs/tef/south_connect/milano.md b/docs/tef/south_connect/milano.md
index f8ff5dca..04c34007 100644
--- a/docs/tef/south_connect/milano.md
+++ b/docs/tef/south_connect/milano.md
@@ -93,10 +93,10 @@ Finally, visualization and analysis tools (Grafana, Spark, Node-RED, Snap4City)
Milano TEF site offers the following datasets:
-- **[Air quality](/data_catalog/metadata_datasets/south_italy_uptown/)**: air quality readings from sensors deployed across the Milano area, reporting temperature, humidity, and pollutant levels along with time and location.
-- **[Biodiversity](/data_catalog/metadata_datasets/south_italy_uptown/)**: data from biodiversity sensors deployed in UpTown, reporting device status, energy levels, and bird species detections over time.
-- **[Energy distribution](/data_catalog/metadata_datasets/south_italy_uptown/)**: data from heating systems deployed in UpTown, reporting energy use, fluid temperatures, and heating power.
-- **[Archaeological site](/data_catalog/metadata_datasets/south_italy_mithraeum-of-circus-maximus/)**: environmental data from sensors deployed in an underground archaeological site, reporting air quality, temperature, humidity, and vibration levels over time.
+- **[Air quality](../../data_catalog/metadata_datasets/south_italy_uptown.md/#site-uptown-airquality)**: air quality readings from sensors deployed across the Milano area, reporting temperature, humidity, and pollutant levels along with time and location.
+- **[Biodiversity](../../data_catalog/metadata_datasets/south_italy_uptown.md/#site-uptown-biodiversity)**: data from biodiversity sensors deployed in UpTown, reporting device status, energy levels, and bird species detections over time.
+- **[Energy distribution](../../data_catalog/metadata_datasets/south_italy_uptown.md/#site-uptown-energy-distribution)**: data from heating systems deployed in UpTown, reporting energy use, fluid temperatures, and heating power.
+- **[Archaeological site](../../data_catalog/metadata_datasets/south_italy_mithraeum-of-circus-maximus.md/#site-mithraeum-of-circus-maximus-archaeological-site)**: environmental data from sensors deployed in an underground archaeological site, reporting air quality, temperature, humidity, and vibration levels over time.
## Key Stakeholders and Partners
diff --git a/docs/toolbox/index.md b/docs/toolbox/index.md
index 4cf1bf24..e6abedd8 100644
--- a/docs/toolbox/index.md
+++ b/docs/toolbox/index.md
@@ -3,8 +3,38 @@ icon: material/toolbox-outline
title: Toolbox
---
-A set of useful tools compatible with MIMs.
+
-- [AI Toolkit](ai_toolkit.md)
-- [MIMs Toolkit](mims_toolkit.md)
-- [Other Tools](other_toolkit.md)
+- :material-cogs:{ .lg .middle } __AI Toolkit__
+
+ ---
+
+    A list of projects that can significantly help in managing AI services.
+
+ [:octicons-arrow-right-24: _More Details_](./ai_toolkit.md)
+
+- :material-cogs:{ .lg .middle } __AI Logging Monitor__
+
+ ---
+
+ The Logging module tracks the model's performance in production.
+
+ [:octicons-arrow-right-24: _More Details_](./ai_logging_monitor.md)
+
+- :material-cogs:{ .lg .middle } __MIMs Toolkit__
+
+ ---
+
+ List of tools and projects that can help services comply with OASC Minimal Interoperability Mechanisms.
+
+ [:octicons-arrow-right-24: _More Details_](./mims_toolkit.md)
+
+- :material-toolbox:{ .lg .middle } __Others__
+
+ ---
+
+ _Other useful tools and projects._
+
+ [:octicons-arrow-right-24: _More Details_](./other_toolkit.md)
+
+