229 changes: 229 additions & 0 deletions ARCHITECTURE.md
@@ -0,0 +1,229 @@
# Architecture

This document describes the internal architecture of CipherStash Proxy. It's intended for anyone who wants to understand how the proxy pulls off transparent, searchable encryption without requiring application changes.

## Overview

CipherStash Proxy sits between an application and PostgreSQL. It intercepts SQL statements over the PostgreSQL wire protocol, determines which columns are encrypted, rewrites queries to use [EQL v2](https://github.com/cipherstash/encrypt-query-language) operations, encrypts literals and parameters, forwards the transformed query to PostgreSQL, and decrypts results before returning them to the application.

The two most interesting pieces of the system are:

1. **eql-mapper** — a SQL type inference and transformation engine that understands which parts of a query touch encrypted columns
2. **The protocol bridge** — a dual-stream PostgreSQL wire protocol interceptor that handles encryption and decryption transparently across both the simple and extended query protocols

## How a Query Flows Through the System

```
Application                    CipherStash Proxy                         PostgreSQL
    |                                |                                        |
    |--- SQL statement ------------->|                                        |
    |                          Parse SQL into AST                             |
    |                          Import schema (tables, columns, EQL types)     |
    |                          Run type inference (unification)               |
    |                          Identify encrypted literals & parameters       |
    |                          Encrypt values via ZeroKMS                     |
    |                          Apply transformation rules to AST              |
    |                          Emit rewritten SQL                             |
    |                                |--- transformed SQL ------------------>|
    |                                |<-- result rows ----------------------|
    |                          Identify encrypted columns in results         |
    |                          Batch-decrypt values via ZeroKMS              |
    |                          Re-encode to PostgreSQL wire format           |
    |<-- plaintext results ----------|                                        |
```
Comment on lines +16 to +33

⚠️ Potential issue | 🟡 Minor

Add a language identifier to the code fence.

Markdownlint flags the diagram block as missing a language specifier. Consider text (or plain) for the ASCII diagram to satisfy MD040.

Suggested fix
-```
+```text
 Application                    CipherStash Proxy                         PostgreSQL
     |                                |                                        |
     |--- SQL statement ------------->|                                        |
     |                          Parse SQL into AST                             |
     |                          Import schema (tables, columns, EQL types)     |
     |                          Run type inference (unification)               |
     |                          Identify encrypted literals & parameters       |
     |                          Encrypt values via ZeroKMS                     |
     |                          Apply transformation rules to AST             |
     |                          Emit rewritten SQL                             |
     |                                |--- transformed SQL ------------------>|
     |                                |<-- result rows ----------------------|
     |                          Identify encrypted columns in results          |
     |                          Batch-decrypt values via ZeroKMS              |
     |                          Re-encode to PostgreSQL wire format            |
     |<-- plaintext results ----------|                                        |
-```
+```
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 16-16: Fenced code blocks should have a language specified

(MD040, fenced-code-language)



## SQL Type Inference Engine (eql-mapper)

The `eql-mapper` package is responsible for analyzing SQL statements and determining exactly which expressions, literals, and parameters need to be encrypted — and *how* they need to be encrypted. It does this through a constraint-based type inference system that operates entirely at parse time, without executing any SQL.

### The Type System

Every AST node in a parsed SQL statement is assigned a type. Types are either:

- **Native** — a standard PostgreSQL type. The proxy doesn't need to do anything special with these.
- **EQL** — an encrypted column type, carrying information about which operations it supports.
- **Projection** — an ordered list of column types (the result shape of a `SELECT`, subquery, or CTE).
- **Var** — an unresolved type variable, used during inference and resolved through unification.
- **Associated** — a type that depends on another type's trait implementation (e.g., "the tokenized form of this column").

EQL types carry **trait bounds** that describe what operations the encrypted column supports:

| Trait | Operations | Example |
|---|---|---|
| `Eq` | `=`, `<>` | `WHERE email = 'alice@example.com'` |
| `Ord` | `<`, `>`, `<=`, `>=`, `MIN`, `MAX` | `WHERE salary > 100000` |
| `TokenMatch` | `LIKE`, `ILIKE` | `WHERE name LIKE '%alice%'` |
| `JsonLike` | `->`, `->>`, `jsonb_path_query` | `WHERE data->>'role' = 'admin'` |
| `Contain` | `@>`, `<@` | `WHERE tags @> '["urgent"]'` |

Traits form a hierarchy — `Ord` implies `Eq`, and `JsonLike` implies both `Ord` and `Eq`.
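
As a loose illustration of that hierarchy, the bounds can be modelled with Rust marker traits (the names below are hypothetical stand-ins, not the real definitions in `eql-mapper`'s unifier):

```rust
// Hypothetical sketch of the trait hierarchy described above:
// Ord implies Eq, and JsonLike implies both Ord and Eq.
trait EqlEq {}
trait EqlOrd: EqlEq {}
trait EqlTokenMatch {}
trait EqlContain {}
trait EqlJsonLike: EqlOrd {}

// A column satisfying JsonLike automatically satisfies the Ord and Eq
// bounds required by operators like `>` and `=`.
struct EncryptedJsonColumn;
impl EqlEq for EncryptedJsonColumn {}
impl EqlOrd for EncryptedJsonColumn {}
impl EqlJsonLike for EncryptedJsonColumn {}

fn requires_eq<T: EqlEq>(_: &T) {}
fn requires_ord<T: EqlOrd>(_: &T) {}

fn main() {
    let col = EncryptedJsonColumn;
    requires_eq(&col);  // OK: JsonLike implies Ord implies Eq
    requires_ord(&col); // OK: JsonLike implies Ord
}
```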

### Unification

Type inference uses a **unification algorithm** (in the Robinson tradition, similar to what you'd find in a Hindley-Milner type system) adapted for SQL and encrypted types. When the type checker encounters an expression like `salary > 100000`, it:

1. Looks up `salary` in the current scope and finds its type (e.g., `EQL(employees.salary, Ord+Eq)`)
2. Assigns a fresh type variable to the literal `100000`
3. Looks up the `>` operator's type signature: `<T>(T > T) -> Native where T: Ord`
4. Unifies `T` with the salary's EQL type, checking that it satisfies the `Ord` bound
5. Unifies `T` with the literal's type variable, binding it to the same EQL type
6. Records that the literal `100000` must be encrypted as `EQL(employees.salary, Ord)`

This process propagates type information across the entire statement — through subqueries, CTEs, JOINs, `UNION` branches, function calls, and `RETURNING` clauses.
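
The following is a minimal sketch of that unification step, using invented `Ty` and `Unifier` types rather than the real unifier API; it shows the literal's type variable being bound to the column's EQL type while the `Ord` bound is checked:

```rust
use std::collections::HashMap;

// Hypothetical types for illustration; the real unifier in eql-mapper is richer.
#[derive(Clone, Debug, PartialEq)]
enum Ty {
    Native,
    Eql { column: String, bounds: Vec<&'static str> },
    Var(usize),
}

struct Unifier {
    bindings: HashMap<usize, Ty>,
}

impl Unifier {
    fn resolve(&self, ty: &Ty) -> Ty {
        match ty {
            Ty::Var(v) => self.bindings.get(v).cloned().unwrap_or(Ty::Var(*v)),
            other => other.clone(),
        }
    }

    // Unify the two operands of `<T>(T > T) -> Native where T: Ord`,
    // checking the `Ord` bound whenever an EQL type is involved.
    fn unify(&mut self, a: &Ty, b: &Ty, bound: &str) -> Result<Ty, String> {
        match (self.resolve(a), self.resolve(b)) {
            (Ty::Var(v), concrete) | (concrete, Ty::Var(v)) => {
                if let Ty::Eql { bounds, .. } = &concrete {
                    if !bounds.iter().any(|bnd| *bnd == bound) {
                        return Err(format!("column does not satisfy `{bound}`"));
                    }
                }
                self.bindings.insert(v, concrete.clone());
                Ok(concrete)
            }
            (x, y) if x == y => Ok(x),
            (x, y) => Err(format!("cannot unify {x:?} with {y:?}")),
        }
    }
}

fn main() {
    let salary = Ty::Eql { column: "employees.salary".into(), bounds: vec!["Eq", "Ord"] };
    let literal = Ty::Var(0); // fresh type variable for the literal 100000

    let mut unifier = Unifier { bindings: HashMap::new() };
    // `salary > 100000`: unify the operand types under the `Ord` bound.
    let resolved = unifier.unify(&salary, &literal, "Ord").unwrap();
    println!("literal must be encrypted as {resolved:?}");
}
```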

A particularly interesting aspect is how EQL types unify with each other. When two `Partial` EQL types for the same column meet, their bounds are merged (union). When a `Partial` meets a `Full`, the result promotes to `Full`. This means the system automatically infers the minimum encryption payload needed for each value.

Copilot AI Feb 12, 2026


This section claims the system infers the minimum encryption payload needed via Partial bounds. While eql-mapper does infer EqlTerm::Partial(_, bounds), the proxy currently discards those bounds when mapping/encrypting (it uses EqlTermVariant and treats Full/Partial/Tokenized the same as EqlOperation::Store). Please either clarify that the payload-minimization is an eql-mapper internal concept not currently used by the proxy encryption path, or update the description to match the current behavior.

Suggested change
A particularly interesting aspect is how EQL types unify with each other. When two `Partial` EQL types for the same column meet, their bounds are merged (union). When a `Partial` meets a `Full`, the result promotes to `Full`. This means the system automatically infers the minimum encryption payload needed for each value.
A particularly interesting aspect is how EQL types unify with each other. When two `Partial` EQL types for the same column meet, their bounds are merged (union). When a `Partial` meets a `Full`, the result promotes to `Full`. Within `eql-mapper`, this allows the type system to infer the minimum bounds required for each value; however, the current proxy encryption path does not yet use these `Partial` bounds and treats `Full`/`Partial`/`Tokenized` the same when deciding what to encrypt.


### Polymorphic Function and Operator Signatures

SQL operators and functions are declared with generic type parameters and trait bounds using custom procedural macros:

```rust
binary_operators! {
    <T>(T = T) -> Native where T: Eq;
    <T>(T <= T) -> Native where T: Ord;
    <T>(T -> <T as JsonLike>::Accessor) -> T where T: JsonLike;
    <T>(T ~~ <T as TokenMatch>::Tokenized) -> Native where T: TokenMatch;
}

functions! {
    pg_catalog.min<T>(T) -> T where T: Ord;
    pg_catalog.max<T>(T) -> T where T: Ord;
    pg_catalog.jsonb_path_query<T>(T, <T as JsonLike>::Path) -> T where T: JsonLike;
}
```

The `<T as JsonLike>::Accessor` syntax is an associated type — it resolves to `EqlTerm::JsonAccessor` when `T` is an EQL type with the `JsonLike` trait, or stays as `Native` when `T` is a native type. This lets the same operator signature work for both encrypted and unencrypted columns.
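
As a loose analogy, this behaviour can be mimicked with Rust's own associated types (all names below are hypothetical stand-ins, not the actual eql-mapper definitions):

```rust
// The associated type in the signature resolves per concrete `T`.
trait JsonLike {
    type Accessor;
}

struct NativeJsonb;      // a plain PostgreSQL jsonb column
struct EncryptedJsonb;   // an EQL column carrying the JsonLike trait
struct JsonAccessorTerm; // stands in for the encrypted accessor term

impl JsonLike for NativeJsonb {
    type Accessor = NativeJsonb; // native side: the accessor stays native
}

impl JsonLike for EncryptedJsonb {
    type Accessor = JsonAccessorTerm; // EQL side: the accessor is an encrypted term
}

// The `->` signature `<T>(T -> <T as JsonLike>::Accessor) -> T`
// works for both cases through the same generic function.
fn json_arrow<T: JsonLike>(_lhs: T, _rhs: T::Accessor) -> Option<T> {
    None // type-level illustration only
}

fn main() {
    let _ = json_arrow(NativeJsonb, NativeJsonb);
    let _ = json_arrow(EncryptedJsonb, JsonAccessorTerm);
}
```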

For unknown functions, the system falls back to assuming all arguments and the return type are native. This is a deliberately safe strategy: native types satisfy all trait bounds, so the type system never blocks a query it doesn't understand. Any actual type errors will be caught by PostgreSQL itself.

### Multi-Pass Single-Traversal Analysis

Three independent visitors operate in concert during a single AST traversal (see the sketch after this list):

- **ScopeTracker** manages lexical scopes — tracking which tables, CTEs, and subquery aliases are visible at each point in the query. It handles column resolution, wildcard expansion (`SELECT *`), and qualified references (`t.column`).
- **Importer** brings schema information into scope. When the traversal enters a `FROM` clause, the importer resolves the table name against the schema and creates a typed projection for it, marking each column as either `Native` or `EQL` with the appropriate trait bounds.
- **TypeInferencer** performs the actual type inference using the unifier. It has specialized implementations for each AST node type — expressions, functions, `INSERT` column mappings, `SELECT` projections, set operations, and so on.
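
A simplified sketch of the composition, using an invented visitor trait and a toy AST node rather than the real eql-mapper visitor machinery:

```rust
// Hypothetical visitor trait and node type, purely to illustrate the
// "three visitors, one traversal" structure.
struct SelectNode { from: String, projection: Vec<String> }

trait AstVisitor {
    fn enter(&mut self, node: &SelectNode);
}

#[derive(Default)] struct ScopeTracker { depth: usize }
#[derive(Default)] struct Importer { imported: Vec<String> }
#[derive(Default)] struct TypeInferencer { constraints: usize }

impl AstVisitor for ScopeTracker {
    fn enter(&mut self, _: &SelectNode) { self.depth += 1; } // push a lexical scope
}
impl AstVisitor for Importer {
    fn enter(&mut self, n: &SelectNode) { self.imported.push(n.from.clone()); } // resolve table
}
impl AstVisitor for TypeInferencer {
    fn enter(&mut self, n: &SelectNode) { self.constraints += n.projection.len(); } // emit constraints
}

// A single traversal drives every visitor at each node.
fn traverse(node: &SelectNode, visitors: &mut [&mut dyn AstVisitor]) {
    for v in visitors.iter_mut() {
        v.enter(node);
    }
    // ...recurse into child nodes, invoking each visitor again...
}

fn main() {
    let ast = SelectNode { from: "employees".into(), projection: vec!["salary".into()] };
    let (mut s, mut i, mut t) = (ScopeTracker::default(), Importer::default(), TypeInferencer::default());
    traverse(&ast, &mut [&mut s as &mut dyn AstVisitor, &mut i, &mut t]);
    println!("scope depth {}, imported {:?}, constraints {}", s.depth, i.imported, t.constraints);
}
```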

### In-Transaction DDL Tracking

When a SQL statement contains DDL (`CREATE TABLE`, `ALTER TABLE`, `DROP TABLE`, etc.), the eql-mapper captures these changes in a `SchemaWithEdits` overlay. This overlay acts as a mask over the loaded schema, so subsequent statements in the same transaction see the updated table structure. When the transaction commits, the proxy triggers a full schema reload.
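
A minimal sketch of the overlay lookup, under the assumption (with hypothetical types) that in-transaction edits are keyed by table name and shadow the loaded schema:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug)]
struct Table { columns: Vec<String> }

struct Schema { tables: HashMap<String, Table> }

enum Edit {
    Created(Table), // CREATE TABLE / ALTER TABLE captured during the transaction
    Dropped,        // DROP TABLE captured during the transaction
}

struct SchemaWithEdits<'a> {
    base: &'a Schema,
    edits: HashMap<String, Edit>,
}

impl<'a> SchemaWithEdits<'a> {
    fn resolve(&self, table: &str) -> Option<&Table> {
        match self.edits.get(table) {
            Some(Edit::Created(t)) => Some(t),   // created/altered in this transaction
            Some(Edit::Dropped) => None,         // dropped in this transaction
            None => self.base.tables.get(table), // fall back to the loaded schema
        }
    }
}

fn main() {
    let base = Schema { tables: HashMap::from([("users".into(), Table { columns: vec!["id".into()] })]) };
    let mut edits = HashMap::new();
    edits.insert("orders".into(), Edit::Created(Table { columns: vec!["id".into(), "total".into()] }));

    let overlay = SchemaWithEdits { base: &base, edits };
    assert!(overlay.resolve("users").is_some());  // visible from the loaded schema
    assert!(overlay.resolve("orders").is_some()); // visible before commit via the overlay
}
```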

## SQL Transformation Pipeline

After type inference determines which parts of a statement touch encrypted columns, the transformation pipeline rewrites the AST. Transformation rules are modular and composable — they implement a `TransformationRule` trait and are composed into a single rule via tuple implementation (supporting chains of 1 to 16 rules).

The current rules:

| Rule | What it does |
|---|---|
| `CastLiteralsAsEncrypted` | Replaces plaintext literals with `eql_v2.cast_as_encrypted(ciphertext)` |
| `CastParamsAsEncrypted` | Wraps parameter placeholders (`$1`, `$2`, ...) with encrypted casts |
| `RewriteContainmentOps` | Transforms `col @> val` to `eql_v2.jsonb_contains(col, val)` |
| `RewriteStandardSqlFnsOnEqlTypes` | Rewrites `min()`, `max()`, `jsonb_path_query()` etc. to `eql_v2.*` equivalents |
| `PreserveEffectiveAliases` | Maintains column aliases through transformations |
| `FailOnPlaceholderChange` | Postcondition check that prepared statement placeholders weren't corrupted |

Each rule has a `would_edit` method that tests whether it would modify the AST without actually modifying it. This enables a **dry-run optimization**: the system first checks if any rule would make changes, and only rebuilds the AST if necessary. For passthrough queries (those that don't touch any encrypted columns), this avoids the cost of AST reconstruction entirely.
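
A simplified sketch of the trait shape and the dry-run check, operating on strings instead of the parsed AST that the real rules rewrite:

```rust
// Hypothetical, string-based stand-in for the real TransformationRule trait.
trait TransformationRule {
    fn would_edit(&self, sql: &str) -> bool;
    fn apply(&self, sql: String) -> String;
}

struct CastLiteralsAsEncrypted;

impl TransformationRule for CastLiteralsAsEncrypted {
    fn would_edit(&self, sql: &str) -> bool {
        sql.contains("'ciphertext-placeholder'") // stand-in check for an encrypted literal
    }
    fn apply(&self, sql: String) -> String {
        sql.replace(
            "'ciphertext-placeholder'",
            "eql_v2.cast_as_encrypted('ciphertext-placeholder')",
        )
    }
}

// Dry-run first: only rebuild the statement when some rule would change it.
fn transform(rules: &[&dyn TransformationRule], sql: &str) -> String {
    if !rules.iter().any(|rule| rule.would_edit(sql)) {
        return sql.to_string(); // passthrough query: skip AST reconstruction
    }
    rules.iter().fold(sql.to_string(), |acc, rule| rule.apply(acc))
}

fn main() {
    let rules: Vec<&dyn TransformationRule> = vec![&CastLiteralsAsEncrypted];
    println!("{}", transform(&rules, "SELECT * FROM users WHERE email = 'ciphertext-placeholder'"));
}
```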

## PostgreSQL Protocol Bridge

The proxy implements the PostgreSQL wire protocol, acting as both a server (to the application) and a client (to PostgreSQL). This is the `packages/cipherstash-proxy/` package.

### Dual-Stream Architecture

Each client connection gets a dedicated pair of handlers:

- **Frontend** (`frontend.rs`) — intercepts client-to-server messages, runs type inference and encryption on SQL statements, and forwards transformed messages to PostgreSQL.
- **Backend** (`backend.rs`) — intercepts server-to-client messages, identifies encrypted columns in result rows, decrypts values, and forwards plaintext results to the client.

These run concurrently on the same connection, connected by a shared `Context` that tracks session state (active statements, portals, column metadata, timing).
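
A heavily simplified sketch of the dual-task layout, assuming tokio as the async runtime; the actual proxy code handles protocol framing, buffering, and errors that are omitted here:

```rust
use std::sync::Arc;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio::sync::Mutex;

// Hypothetical session state; the real Context tracks statements, portals,
// column metadata, and timing.
#[derive(Default)]
struct Context {
    statements: Vec<String>,
}

async fn handle_connection(client: TcpStream, server: TcpStream) {
    let (mut client_rx, mut client_tx) = client.into_split();
    let (mut server_rx, mut server_tx) = server.into_split();
    let ctx = Arc::new(Mutex::new(Context::default()));

    // Frontend: client -> server, encrypting on the way through.
    let frontend_ctx = Arc::clone(&ctx);
    let frontend = tokio::spawn(async move {
        let mut buf = vec![0u8; 8192];
        loop {
            let n = client_rx.read(&mut buf).await.unwrap_or(0);
            if n == 0 { break; }
            frontend_ctx.lock().await.statements.push("parsed statement".into());
            // ...parse, type-check, encrypt, and rewrite here...
            if server_tx.write_all(&buf[..n]).await.is_err() { break; }
        }
    });

    // Backend: server -> client, decrypting result rows.
    let backend_ctx = Arc::clone(&ctx);
    let backend = tokio::spawn(async move {
        let mut buf = vec![0u8; 8192];
        loop {
            let n = server_rx.read(&mut buf).await.unwrap_or(0);
            if n == 0 { break; }
            let _metadata = backend_ctx.lock().await.statements.len();
            // ...identify encrypted columns, batch-decrypt, re-encode here...
            if client_tx.write_all(&buf[..n]).await.is_err() { break; }
        }
    });

    let _ = tokio::join!(frontend, backend);
}

fn main() {
    // In the real proxy, handle_connection is driven from an accept loop
    // inside an async runtime; it is shown here only to illustrate the shape.
}
```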

### Extended Query Protocol

The PostgreSQL extended query protocol separates SQL handling into distinct phases — Parse, Bind, Describe, Execute — with explicit Sync points. The proxy must track state across these phases:

- **Parse**: The proxy intercepts the SQL, runs type inference, encrypts any literals, transforms the AST, and forwards the rewritten SQL. It stores the type-checked statement metadata (column types, parameter types, projection) in the context.
- **Bind**: When parameters are bound to a prepared statement, the proxy looks up which parameters need encryption (from the Parse phase metadata), encrypts them, and forwards the modified Bind message.
- **Execute/Describe**: These are forwarded, with the backend using stored metadata to know which result columns need decryption.

Error recovery follows PostgreSQL semantics: when an error occurs, all messages are discarded until the next Sync message.
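
A sketch of the state handoff between Parse and Bind, using hypothetical metadata types and a placeholder in place of the real encryption call:

```rust
use std::collections::HashMap;

// Hypothetical shape of the state carried from Parse to Bind.
struct StatementMetadata {
    rewritten_sql: String,
    encrypted_params: Vec<usize>,         // parameter indexes that require encryption
    encrypted_result_columns: Vec<usize>, // used later by the backend for decryption
}

#[derive(Default)]
struct SessionContext {
    statements: HashMap<String, StatementMetadata>, // keyed by prepared-statement name
}

impl SessionContext {
    // Parse: record what type inference discovered about the statement.
    fn on_parse(&mut self, name: &str, meta: StatementMetadata) {
        self.statements.insert(name.to_string(), meta);
    }

    // Bind: look up which parameters must be encrypted before forwarding.
    fn on_bind(&self, name: &str, params: &mut [Vec<u8>]) {
        if let Some(meta) = self.statements.get(name) {
            for &idx in &meta.encrypted_params {
                if let Some(p) = params.get_mut(idx) {
                    *p = encrypt_placeholder(p); // the real proxy calls ZeroKMS here
                }
            }
        }
    }
}

fn encrypt_placeholder(plaintext: &[u8]) -> Vec<u8> {
    let mut out = b"enc:".to_vec();
    out.extend_from_slice(plaintext);
    out
}

fn main() {
    let mut ctx = SessionContext::default();
    ctx.on_parse("stmt_1", StatementMetadata {
        rewritten_sql: "SELECT id FROM users WHERE email = eql_v2.cast_as_encrypted($1)".into(),
        encrypted_params: vec![0],
        encrypted_result_columns: vec![],
    });

    let mut params = vec![b"alice@example.com".to_vec()];
    ctx.on_bind("stmt_1", &mut params);
    assert!(params[0].starts_with(b"enc:"));
}
```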

### Batch Decryption

Result rows containing encrypted data are buffered in a `MessageBuffer` (default capacity: 4096 rows) to enable efficient batch decryption. The buffer flushes when:

- It reaches capacity
- A non-DataRow message arrives (e.g., `CommandComplete`)
- The command completes

This batching reduces the number of decryption API round-trips. After decryption, values are re-encoded into the correct PostgreSQL wire format (text or binary) based on the format codes specified by the client.
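
A sketch of the buffering policy with hypothetical message types; the real `MessageBuffer` works on wire-protocol messages rather than raw byte vectors:

```rust
// DataRows are held until capacity is reached or a non-DataRow message arrives.
enum BackendMessage {
    DataRow(Vec<u8>),
    CommandComplete,
}

struct MessageBuffer {
    capacity: usize,
    rows: Vec<Vec<u8>>,
}

impl MessageBuffer {
    fn new(capacity: usize) -> Self {
        Self { capacity, rows: Vec::with_capacity(capacity) }
    }

    fn push(&mut self, msg: BackendMessage) -> Option<Vec<Vec<u8>>> {
        match msg {
            BackendMessage::DataRow(row) => {
                self.rows.push(row);
                if self.rows.len() >= self.capacity {
                    Some(self.flush()) // capacity reached: batch-decrypt now
                } else {
                    None
                }
            }
            // Any non-DataRow message (e.g. CommandComplete) forces a flush so
            // buffered rows are decrypted and forwarded before it.
            BackendMessage::CommandComplete => Some(self.flush()),
        }
    }

    fn flush(&mut self) -> Vec<Vec<u8>> {
        std::mem::take(&mut self.rows) // hand the batch to the decryption step
    }
}

fn main() {
    let mut buf = MessageBuffer::new(4096);
    assert!(buf.push(BackendMessage::DataRow(vec![1])).is_none());
    let batch = buf.push(BackendMessage::CommandComplete).unwrap();
    assert_eq!(batch.len(), 1); // one buffered row flushed for batch decryption
}
```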

### Authentication Bridging

The proxy handles authentication on both sides independently. It supports:

- MD5 password authentication
- SASL/SCRAM-SHA-256
- SCRAM-SHA-256-PLUS (with TLS channel binding)

The proxy authenticates the client using its own configured credentials, then separately authenticates with PostgreSQL using the database credentials. SSL/TLS negotiation is handled on both sides.

## Encryption and Key Management

Encryption operations go through CipherStash ZeroKMS. The proxy maintains a cache of `ScopedCipher` instances (keyed by keyset identifier) using a memory-weighted async cache with TTL eviction. Cache capacity is measured in bytes, not entry count.
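
One way to express a memory-weighted async cache with TTL eviction is with a crate such as `moka`; the snippet below is an illustrative assumption (including the crate choice, the tokio runtime, and the pared-down `ScopedCipher` type), not the proxy's actual implementation:

```rust
use std::sync::Arc;
use std::time::Duration;

use moka::future::Cache; // assumes the `moka` crate with its "future" feature

#[derive(Clone)]
struct ScopedCipher {
    keyset_id: String,
    key_material: Vec<u8>, // placeholder for the real cipher state
}

#[tokio::main]
async fn main() {
    let cache: Cache<String, Arc<ScopedCipher>> = Cache::builder()
        // Capacity is measured in bytes, so weigh entries by their size.
        .weigher(|_keyset_id: &String, cipher: &Arc<ScopedCipher>| {
            cipher.key_material.len() as u32
        })
        .max_capacity(64 * 1024 * 1024) // 64 MiB of cached cipher state
        .time_to_live(Duration::from_secs(600))
        .build();

    // Fetch-or-initialise: only the first caller for a keyset builds the cipher.
    let cipher = cache
        .get_with("keyset-123".to_string(), async {
            Arc::new(ScopedCipher {
                keyset_id: "keyset-123".into(),
                key_material: vec![0u8; 4096], // in reality, derived via ZeroKMS
            })
        })
        .await;

    println!("cached cipher for keyset {}", cipher.keyset_id);
}
```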

### EQL Operation Routing

The type inference system determines not just *that* a value needs encryption, but *how*. Different EQL term variants map to different encryption operations:

| EQL Term | Encryption Operation | Use Case |
|---|---|---|
| `Full` | `EqlOperation::Store` | Inserting a new encrypted value with all search terms |
| `Partial(Eq)` | `EqlOperation::Store` | Equality query — only equality search terms needed |
| `Partial(Ord)` | `EqlOperation::Store` | Comparison query — only ORE search terms needed |
| `Tokenized` | `EqlOperation::Store` | LIKE query — tokenized search terms |
| `JsonPath` | `EqlOperation::Query` with `SteVecSelector` | JSON path query argument |
| `JsonAccessor` | `EqlOperation::Query` with field selector | JSON field access argument |

Comment on lines +181 to +187

Copilot AI Feb 12, 2026


The EQL operation routing table suggests Partial(Eq) vs Partial(Ord) map to different storage behavior and that JsonAccessor uses a distinct “field selector” query op. In the current proxy encryption service, Full/Partial/Tokenized all use EqlOperation::Store, and both JsonPath and JsonAccessor currently use EqlOperation::Query(..., QueryOp::SteVecSelector) when a SteVec index exists. Please align this table with the implemented routing logic (or note where behavior differs by design).

Suggested change
| `Full` | `EqlOperation::Store` | Inserting a new encrypted value with all search terms |
| `Partial(Eq)` | `EqlOperation::Store` | Equality query — only equality search terms needed |
| `Partial(Ord)` | `EqlOperation::Store` | Comparison query — only ORE search terms needed |
| `Tokenized` | `EqlOperation::Store` | LIKE query — tokenized search terms |
| `JsonPath` | `EqlOperation::Query` with `SteVecSelector` | JSON path query argument |
| `JsonAccessor` | `EqlOperation::Query` with field selector | JSON field access argument |
| `Full` | `EqlOperation::Store` | Insert/update with all configured search terms materialised |
| `Partial(Eq)` | `EqlOperation::Store` | Equality-oriented operations — only equality search terms are constructed |
| `Partial(Ord)` | `EqlOperation::Store` | Ordering/comparison operations — only ORE search terms are constructed |
| `Tokenized` | `EqlOperation::Store` | Pattern/LIKE-style operations — tokenized search terms are constructed |
| `JsonPath` | `EqlOperation::Query` with `SteVecSelector` | JSON path query argument (uses SteVec index when available) |
| `JsonAccessor` | `EqlOperation::Query` with `SteVecSelector` | JSON field access argument (same SteVec selector as `JsonPath` in current implementation) |
**Note:** In the current proxy implementation, all of `Full`, `Partial(Eq)`, `Partial(Ord)`, and `Tokenized` are routed to `EqlOperation::Store`; the `Partial`/`Tokenized` variants only affect which search terms are built inside the payload. Likewise, both `JsonPath` and `JsonAccessor` use `EqlOperation::Query(..., QueryOp::SteVecSelector)` when a SteVec index exists, even though the mapper distinguishes them conceptually.

### Sparse Batch Encryption

When encrypting values for a statement, many columns may be `NULL` or non-encrypted. The proxy uses a sparse batch pattern: it collects only the non-null encrypted values (tracking their original positions), sends them to ZeroKMS in a single batch, then reconstructs the result vector with encrypted values placed back at their original positions. This minimizes API calls while handling nullable columns correctly.
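
A sketch of the sparse-batch pattern, with a stand-in `encrypt_batch` function in place of the real ZeroKMS call:

```rust
// Stand-in for the batched ZeroKMS encryption call.
fn encrypt_batch(plaintexts: &[&str]) -> Vec<String> {
    plaintexts.iter().map(|p| format!("enc({p})")).collect()
}

fn encrypt_sparse(values: Vec<Option<&str>>) -> Vec<Option<String>> {
    // 1. Collect the non-null values along with their original positions.
    let (positions, plaintexts): (Vec<usize>, Vec<&str>) = values
        .iter()
        .copied()
        .enumerate()
        .filter_map(|(i, v)| v.map(|p| (i, p)))
        .unzip();

    // 2. One batch round-trip for everything that actually needs encryption.
    let ciphertexts = encrypt_batch(&plaintexts);

    // 3. Rebuild the full-width result with ciphertexts at their original slots.
    let mut result = vec![None; values.len()];
    for (pos, ct) in positions.into_iter().zip(ciphertexts) {
        result[pos] = Some(ct);
    }
    result
}

fn main() {
    let row = vec![Some("alice@example.com"), None, Some("100000")];
    let encrypted = encrypt_sparse(row);
    assert_eq!(encrypted[1], None); // NULLs stay NULL and never hit the API
    println!("{encrypted:?}");
}
```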

## Schema Management

The proxy discovers the database schema at startup and reloads it periodically. Schema loading queries PostgreSQL's `information_schema` to discover tables and columns, then checks `eql_v2_configuration` to determine which columns are encrypted and what index types they support.

Copilot AI Feb 12, 2026


The schema-loading description is inaccurate: the proxy’s schema loader marks encrypted columns by checking information_schema.columns.udt_name == 'eql_v2_encrypted' (see select_table_schemas.sql / SchemaManager::load_schema). It does not consult eql_v2_configuration for this, and index support comes from the separate Encrypt configuration loaded from public.eql_v2_configuration (via select_config.sql). Please update this paragraph to reflect the actual split between schema discovery vs encrypt-config loading.

Suggested change
The proxy discovers the database schema at startup and reloads it periodically. Schema loading queries PostgreSQL's `information_schema` to discover tables and columns, then checks `eql_v2_configuration` to determine which columns are encrypted and what index types they support.
The proxy discovers the database schema at startup and reloads it periodically. Schema loading queries PostgreSQL's `information_schema` to discover tables and columns and marks encrypted columns by checking `information_schema.columns.udt_name = 'eql_v2_encrypted'` (via `select_table_schemas.sql` / `SchemaManager::load_schema`). Separately, it loads the Encrypt configuration from `public.eql_v2_configuration` (via `select_config.sql`) to determine index types and other search capabilities for those encrypted columns.


Schema state is stored behind an `ArcSwap`, which provides lock-free reads with atomic updates. This means query processing never blocks on a schema reload — readers always get a consistent snapshot.
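
A sketch of the `ArcSwap` pattern with a pared-down schema type (the real schema structure is richer):

```rust
use std::sync::Arc;
use arc_swap::ArcSwap;

#[derive(Debug)]
struct Schema {
    version: u64,
    tables: Vec<String>,
}

fn main() {
    let schema = ArcSwap::from_pointee(Schema { version: 1, tables: vec!["users".into()] });

    // Readers: lock-free, always a consistent snapshot.
    let snapshot = schema.load();
    println!("planning against schema v{} ({} tables)", snapshot.version, snapshot.tables.len());

    // Reloader: build the new schema off to the side, then swap it in atomically.
    schema.store(Arc::new(Schema { version: 2, tables: vec!["users".into(), "orders".into()] }));

    // In-flight readers keep their old snapshot; new readers see v2.
    assert_eq!(schema.load().version, 2);
}
```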

The reload cycle:
1. **Startup** — load schema with exponential backoff retry (up to 10 attempts, max 2-second backoff) to handle cases where PostgreSQL isn't ready yet
2. **Periodic** — a background task reloads schema on a configurable interval
3. **On-demand** — DDL detection during a transaction triggers a reload when the transaction completes
Comment on lines +199 to +201

Copilot AI Feb 12, 2026


The reload-cycle bullet for Startup says schema is loaded with exponential backoff retry. In the current implementation, startup loads schema/config without retries (SchemaManager::init calls load_schema, and EncryptConfigManager::init_reloader calls load_encrypt_config); retries with backoff only happen during reloads (load_schema_with_retry / load_encrypt_config_with_retry). Please adjust the bullet list so it matches runtime behavior.

Suggested change
1. **Startup** — load schema with exponential backoff retry (up to 10 attempts, max 2-second backoff) to handle cases where PostgreSQL isn't ready yet
2. **Periodic** — a background task reloads schema on a configurable interval
3. **On-demand** — DDL detection during a transaction triggers a reload when the transaction completes
1. **Startup** — load schema and encryption configuration once; if this fails, proxy startup fails (no retry/backoff is performed during startup)
2. **Periodic** — a background task reloads schema and encryption configuration on a configurable interval, using exponential backoff retry on failure (up to 10 attempts, max 2-second backoff)
3. **On-demand** — DDL detection during a transaction triggers a reload when the transaction completes, also using the same exponential backoff retry behavior on failure


## Package Structure

```
packages/
├── cipherstash-proxy/           # Main proxy binary
│   └── src/
│       ├── postgresql/          # Wire protocol implementation
│       │   ├── frontend.rs      # Client → Server message handling
│       │   ├── backend.rs       # Server → Client message handling
│       │   ├── handler.rs       # Connection startup and auth
│       │   ├── protocol.rs      # Low-level message reading
│       │   ├── parser.rs        # SQL parsing entry point
│       │   └── context/         # Session state (statements, portals, metadata)
│       ├── proxy/               # Encryption service, schema management, config
│       └── config/              # Configuration parsing
├── eql-mapper/                  # SQL type inference and transformation
│   └── src/
│       ├── inference/           # Type inference engine
│       │   ├── unifier/         # Unification algorithm, type definitions, trait bounds
│       │   ├── sql_types/       # Operator and function type signatures
│       │   └── infer_type_impls/# Per-AST-node type inference implementations
│       ├── transformation_rules/# AST rewriting rules
│       ├── model/               # Schema, tables, columns, DDL tracking
│       └── scope_tracker.rs     # Lexical scope management
├── eql-mapper-macros/           # Proc macros for operator/function declarations
└── showcase/                    # Example healthcare data model
```
Comment on lines +205 to +229


⚠️ Potential issue | 🟡 Minor

Add a language identifier to the code fence.

Markdownlint flags the package tree block as missing a language specifier. Consider text for the tree.

Suggested fix
-```
+```text
 packages/
 ├── cipherstash-proxy/           # Main proxy binary
 │   └── src/
 │       ├── postgresql/          # Wire protocol implementation
 │       │   ├── frontend.rs      # Client → Server message handling
 │       │   ├── backend.rs       # Server → Client message handling
 │       │   ├── handler.rs       # Connection startup and auth
 │       │   ├── protocol.rs      # Low-level message reading
 │       │   ├── parser.rs        # SQL parsing entry point
 │       │   └── context/         # Session state (statements, portals, metadata)
 │       ├── proxy/               # Encryption service, schema management, config
 │       └── config/              # Configuration parsing
 ├── eql-mapper/                  # SQL type inference and transformation
 │   └── src/
 │       ├── inference/           # Type inference engine
 │       │   ├── unifier/         # Unification algorithm, type definitions, trait bounds
 │       │   ├── sql_types/       # Operator and function type signatures
 │       │   └── infer_type_impls/# Per-AST-node type inference implementations
 │       ├── transformation_rules/# AST rewriting rules
 │       ├── model/               # Schema, tables, columns, DDL tracking
 │       └── scope_tracker.rs     # Lexical scope management
 ├── eql-mapper-macros/           # Proc macros for operator/function declarations
 └── showcase/                    # Example healthcare data model
-```
+```
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 205-205: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


1 change: 1 addition & 0 deletions README.md
@@ -210,6 +210,7 @@ This demonstrates the power of CipherStash Proxy:

Check out our [how-to guide](docs/how-to/index.md) for Proxy, or jump straight into the [reference guide](docs/reference/index.md).
For information on developing for Proxy, see the [Proxy development guide](./DEVELOPMENT.md).
For a deep dive into how the proxy works internally, see the [Architecture guide](./ARCHITECTURE.md).

---
