From 83c417b0e100bf03c3110b601c8056a1bcc5115c Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Mon, 13 Apr 2026 15:29:46 +0200 Subject: [PATCH 1/2] Document new WAL errors functionality --- .../source-db/postgres-maintenance.mdx | 18 +++++++++++ debugging/error-codes.mdx | 27 +++++++++++++++- .../production-readiness-guide.mdx | 6 +++- maintenance-ops/self-hosting/diagnostics.mdx | 31 ++++++++++++------- 4 files changed, 69 insertions(+), 13 deletions(-) diff --git a/configuration/source-db/postgres-maintenance.mdx b/configuration/source-db/postgres-maintenance.mdx index 1ef5ab3a..47c973f1 100644 --- a/configuration/source-db/postgres-maintenance.mdx +++ b/configuration/source-db/postgres-maintenance.mdx @@ -33,6 +33,24 @@ select slot_name, pg_drop_replication_slot(slot_name) from pg_replication_slots Postgres prevents active slots from being dropped. If it does happen (e.g. while a PowerSync instance is disconnected), PowerSync would automatically re-create the slot, and restart replication. +### WAL Slot Invalidation + +Postgres can invalidate a replication slot when the amount of retained WAL data exceeds the `max_slot_wal_keep_size` limit. This is most likely to happen during a long-running initial snapshot — PowerSync must hold the slot open while copying your entire dataset, and WAL accumulates throughout that time. + +If the slot is invalidated mid-snapshot, PowerSync detects this early and aborts with error [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues) rather than continuing a doomed snapshot. The fix is to increase `max_slot_wal_keep_size` on the source database and then redeploy your sync config to trigger a fresh snapshot. + +To check the current `max_slot_wal_keep_size` value: + +```sql +SELECT setting AS max_slot_wal_keep_size +FROM pg_settings +WHERE name = 'max_slot_wal_keep_size'; +``` + +A value of `-1` means unlimited (no cap on WAL retention). 
If your database has a cap set, make sure it is large enough to cover the full WAL growth expected during an initial snapshot. See [Managing & Monitoring Replication Lag](/maintenance-ops/production-readiness-guide#managing--monitoring-replication-lag) for guidance on choosing an appropriate value. + +You can monitor slot health in real time using the [Diagnostics API](/maintenance-ops/self-hosting/diagnostics). The `wal_status`, `safe_wal_size`, and `max_slot_wal_keep_size` fields on each connection object show how much WAL budget remains. The PowerSync Service also logs a warning when less than 50% of the WAL budget remains during a snapshot. + ### Maximum Replication Slots Postgres is configured with a maximum number of replication slots per server. Since each PowerSync instance uses one replication slot for replication and an additional one while deploying a new Sync Streams/Rules version, the maximum number of PowerSync instances connected to one Postgres server is equal to the maximum number of replication slots, minus 1\. diff --git a/debugging/error-codes.mdx b/debugging/error-codes.mdx index 05eed7a2..a6c15c67 100644 --- a/debugging/error-codes.mdx +++ b/debugging/error-codes.mdx @@ -62,6 +62,11 @@ This reference documents PowerSync error codes organized by component, with trou This may occur if there is very deep nesting in JSON or embedded documents. +- **PSYNC_S1005**: + Storage version not supported. + + This could be caused by a downgrade to a version that does not support the current storage version. + ## PSYNC_S11xx: Postgres replication issues - **PSYNC_S1101**: @@ -143,6 +148,15 @@ This reference documents PowerSync error codes organized by component, with trou An alternative is to create explicit policies for the replication role. If you have done that, you may ignore this warning. +- **PSYNC_S1146**: + Replication slot invalidated. 
+ + The replication slot was invalidated by PostgreSQL, typically because WAL retention exceeded `max_slot_wal_keep_size` during a long-running snapshot. Increase `max_slot_wal_keep_size` on the source database and redeploy Sync Streams/Sync Rules to trigger a fresh snapshot. + + Other causes: `rows_removed` (catalog rows needed by the slot were removed), `wal_level_insufficient`, or `idle_timeout` (PostgreSQL 18+). + + See [Managing & Monitoring Replication Lag](/maintenance-ops/production-readiness-guide#managing--monitoring-replication-lag) for guidance on sizing `max_slot_wal_keep_size`. + ## PSYNC_S12xx: MySQL replication issues ## PSYNC_S13xx: MongoDB replication issues @@ -235,6 +249,17 @@ This reference documents PowerSync error codes organized by component, with trou Possible causes: - Older data has been cleaned up due to exceeding the retention period. +## PSYNC_S16xx: MSSQL replication issues + +- **PSYNC_S1601**: + A replicated source table's capture instance has been dropped during a polling cycle. + + Possible causes: + - CDC has been disabled for the table. + - The table has been dropped, which also drops the capture instance. + + Replication for the table will only resume once CDC has been re-enabled for the table. + ## PSYNC_S2xxx: Service API - **PSYNC_S2001**: @@ -303,7 +328,7 @@ This does not include auth configuration errors on the service. - **PSYNC_S2203**: IPs in this range are not supported. - Make sure to use a publically-accessible JWKS URI. + Make sure to use a publicly-accessible JWKS URI. - **PSYNC_S2204**: JWKS request failed. 
diff --git a/maintenance-ops/production-readiness-guide.mdx b/maintenance-ops/production-readiness-guide.mdx index 1ce30c9d..ab4bb909 100644 --- a/maintenance-ops/production-readiness-guide.mdx +++ b/maintenance-ops/production-readiness-guide.mdx @@ -288,7 +288,11 @@ WHERE name = 'max_slot_wal_keep_size' ``` It's recommended to check the current replication slot lag and `max_slot_wal_keep_size` when deploying Sync Streams/Sync Rules changes to your PowerSync Service instance, especially when you're working with large database volumes. -If you notice that the replication lag is greater than the current `max_slot_wal_keep_size` it's recommended to increase value of the `max_slot_wal_keep_size` on the connected source Postgres database to accommodate for the lag and to ensure the PowerSync Service can complete initial replication without further delays. +If you notice that the replication lag is greater than the current `max_slot_wal_keep_size`, it's recommended to increase the value of `max_slot_wal_keep_size` on the connected source Postgres database to accommodate the lag and to ensure the PowerSync Service can complete initial replication without further delays. + +If the slot is invalidated, PowerSync aborts the snapshot early and surfaces error [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues). After increasing `max_slot_wal_keep_size`, redeploy your sync config to trigger a fresh snapshot. + +You can also monitor slot health in real time using the [Diagnostics API](/maintenance-ops/self-hosting/diagnostics). Each connection object in the response includes `wal_status` (slot status from `pg_replication_slots`), `safe_wal_size` (bytes remaining before potential invalidation), and `max_slot_wal_keep_size` (the configured cap). The PowerSync Service logs a warning when less than 50% of the WAL budget is remaining during a snapshot. 
### Managing Replication Slots diff --git a/maintenance-ops/self-hosting/diagnostics.mdx b/maintenance-ops/self-hosting/diagnostics.mdx index 6e6674f6..06ff6b32 100644 --- a/maintenance-ops/self-hosting/diagnostics.mdx +++ b/maintenance-ops/self-hosting/diagnostics.mdx @@ -1,13 +1,9 @@ --- title: "Diagnostics" -description: "How to use the PowerSync Service Diagnostics API" +description: "How to use the PowerSync Service Diagnostics API to inspect replication status, errors, and slot health." --- -All self-hosted PowerSync Service instances ship with a Diagnostics API. -This API provides the following diagnostic information: - -- Connections → Connected backend source database and any active errors associated with the connection. -- Active Sync Streams / Sync Rules → Currently deployed Sync Streams (or legacy Sync Rules) and its status. +All self-hosted PowerSync Service instances ship with a Diagnostics API for inspecting replication state, surfacing errors, and monitoring source database health. ## CLI @@ -17,27 +13,40 @@ If you have the [PowerSync CLI](/tools/cli) installed, use `powersync status` to powersync status # Extract a specific field -powersync status --output=json | jq '.connections[0]' +powersync status --output=json | jq '.data.active_sync_rules' ``` ## Diagnostics API -# Configuration +### Configuration -1. To enable the Diagnostics API, specify an API token in your PowerSync YAML file: +1. Specify an API token in your PowerSync YAML file: ```yaml service.yaml api: tokens: - YOUR_API_TOKEN ``` -Make sure to use a secure API token as part of this configuration + +Use a secure, randomly generated API token. 2. Restart the PowerSync Service. -3. Once configured, send an HTTP request to your PowerSync Service Diagnostics API endpoint. Include the API token set in step 1 as a Bearer token in the Authorization header. +3. 
Send a POST request to the diagnostics endpoint, passing the token as a Bearer token: ```shell curl -X POST http://localhost:8080/api/admin/v1/diagnostics \ -H "Authorization: Bearer YOUR_API_TOKEN" ``` + +### Response structure + +The response `data` object contains information about: + +**`connections`** — whether PowerSync can reach the configured source database, and any connection-level errors. + +**`active_sync_rules`** — the currently serving sync config (Sync Streams/Sync Rules). Shows which replication slot is in use, whether the initial snapshot has completed, replication lag, which tables are being replicated, and any errors. + +**`deploying_sync_rules`** — only present while a new sync config is being deployed. PowerSync runs the new snapshot in parallel so clients continue to be served by the existing active config. Once the snapshot completes, this section disappears and `active_sync_rules` updates. Errors during deployment (snapshot failures, configuration problems) surface here rather than in `active_sync_rules`. + +For Postgres sources on version 13 or later, each connection entry in `active_sync_rules` also includes `wal_status`, `safe_wal_size`, and `max_slot_wal_keep_size`. These fields show how much WAL budget remains before the replication slot could be invalidated, which is particularly useful to monitor when deploying a new sync config against a large database. See [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues) for details on slot invalidation and how to resolve it. 
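The WAL-budget fields described above can also be consumed programmatically. As a rough sketch only (the exact payload nesting and the sample values are assumptions for illustration; inspect the actual response from your own instance's `/api/admin/v1/diagnostics` endpoint), the snippet below computes the remaining WAL budget with `jq`, which the docs already use for CLI output:

```shell
# Hypothetical diagnostics excerpt; the real payload from your instance
# may nest these fields differently, so verify against a live response.
resp='{"data":{"active_sync_rules":{"connections":[{"wal_status":"reserved","safe_wal_size":3221225472,"max_slot_wal_keep_size":4294967296}]}}}'

# Percentage of the WAL budget still available for the first connection.
pct_left=$(echo "$resp" | jq '.data.active_sync_rules.connections[0]
  | 100 * .safe_wal_size / .max_slot_wal_keep_size | floor')

# Mirror the service's own 50% warning threshold.
if [ "$pct_left" -lt 50 ]; then
  echo "WARN: only ${pct_left}% of the WAL budget remains"
else
  echo "OK: ${pct_left}% of the WAL budget remains"
fi
```

A check like this can run on a cron schedule to page an operator before the slot reaches the `lost` state.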
From 6d0e76a675a79589f0c6bcbe5ac54b9351612f6a Mon Sep 17 00:00:00 2001 From: benitav Date: Thu, 16 Apr 2026 13:44:48 +0200 Subject: [PATCH 2/2] Apply suggestions from code review Co-authored-by: Jose Vargas --- configuration/source-db/postgres-maintenance.mdx | 6 ++---- debugging/error-codes.mdx | 6 ++++-- maintenance-ops/production-readiness-guide.mdx | 2 +- maintenance-ops/self-hosting/diagnostics.mdx | 2 +- 4 files changed, 8 insertions(+), 8 deletions(-) diff --git a/configuration/source-db/postgres-maintenance.mdx b/configuration/source-db/postgres-maintenance.mdx index 47c973f1..6662b8ac 100644 --- a/configuration/source-db/postgres-maintenance.mdx +++ b/configuration/source-db/postgres-maintenance.mdx @@ -37,14 +37,12 @@ Postgres prevents active slots from being dropped. If it does happen (e.g. while Postgres can invalidate a replication slot when the amount of retained WAL data exceeds the `max_slot_wal_keep_size` limit. This is most likely to happen during a long-running initial snapshot — PowerSync must hold the slot open while copying your entire dataset, and WAL accumulates throughout that time. -If the slot is invalidated mid-snapshot, PowerSync detects this early and aborts with error [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues) rather than continuing a doomed snapshot. The fix is to increase `max_slot_wal_keep_size` on the source database and then redeploy your sync config to trigger a fresh snapshot. +If the slot is invalidated mid-snapshot, PowerSync detects this early and aborts with error [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues) rather than continuing a doomed snapshot. The fix is to increase `max_slot_wal_keep_size` on the source database and delete the existing replication slot. PowerSync will automatically create a new slot and restart the snapshot. 
To check the current `max_slot_wal_keep_size` value: ```sql -SELECT setting AS max_slot_wal_keep_size -FROM pg_settings -WHERE name = 'max_slot_wal_keep_size'; +SHOW max_slot_wal_keep_size; ``` A value of `-1` means unlimited (no cap on WAL retention). If your database has a cap set, make sure it is large enough to cover the full WAL growth expected during an initial snapshot. See [Managing & Monitoring Replication Lag](/maintenance-ops/production-readiness-guide#managing--monitoring-replication-lag) for guidance on choosing an appropriate value. diff --git a/debugging/error-codes.mdx b/debugging/error-codes.mdx index a6c15c67..058ca57d 100644 --- a/debugging/error-codes.mdx +++ b/debugging/error-codes.mdx @@ -151,9 +151,11 @@ This reference documents PowerSync error codes organized by component, with trou - **PSYNC_S1146**: Replication slot invalidated. - The replication slot was invalidated by PostgreSQL, typically because WAL retention exceeded `max_slot_wal_keep_size` during a long-running snapshot. Increase `max_slot_wal_keep_size` on the source database and redeploy Sync Streams/Sync Rules to trigger a fresh snapshot. + The replication slot was invalidated by PostgreSQL, typically because WAL retention exceeded `max_slot_wal_keep_size` during a long-running snapshot. Increase `max_slot_wal_keep_size` on the source database and delete the existing replication slot to recover. PowerSync will create a new slot and restart replication automatically. - Other causes: `rows_removed` (catalog rows needed by the slot were removed), `wal_level_insufficient`, or `idle_timeout` (PostgreSQL 18+). + Other causes: `rows_removed` (catalog rows needed by the slot were removed), `wal_level_insufficient`, or `idle_timeout`. + + `idle_timeout` is a PostgreSQL 18+ slot invalidation; in this case, increase `idle_replication_slot_timeout` instead of `max_slot_wal_keep_size`. 
See [Managing & Monitoring Replication Lag](/maintenance-ops/production-readiness-guide#managing--monitoring-replication-lag) for guidance on sizing `max_slot_wal_keep_size`. diff --git a/maintenance-ops/production-readiness-guide.mdx b/maintenance-ops/production-readiness-guide.mdx index ab4bb909..115f6da3 100644 --- a/maintenance-ops/production-readiness-guide.mdx +++ b/maintenance-ops/production-readiness-guide.mdx @@ -290,7 +290,7 @@ WHERE name = 'max_slot_wal_keep_size' ``` It's recommended to check the current replication slot lag and `max_slot_wal_keep_size` when deploying Sync Streams/Sync Rules changes to your PowerSync Service instance, especially when you're working with large database volumes. If you notice that the replication lag is greater than the current `max_slot_wal_keep_size`, it's recommended to increase the value of `max_slot_wal_keep_size` on the connected source Postgres database to accommodate the lag and to ensure the PowerSync Service can complete initial replication without further delays. -If the slot is invalidated, PowerSync aborts the snapshot early and surfaces error [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues). After increasing `max_slot_wal_keep_size`, redeploy your sync config to trigger a fresh snapshot. +If the slot is invalidated, PowerSync aborts the snapshot early and surfaces error [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues). After increasing `max_slot_wal_keep_size`, delete the existing replication slot. PowerSync will automatically create a new slot and restart the snapshot. You can also monitor slot health in real time using the [Diagnostics API](/maintenance-ops/self-hosting/diagnostics). Each connection object in the response includes `wal_status` (slot status from `pg_replication_slots`), `safe_wal_size` (bytes remaining before potential invalidation), and `max_slot_wal_keep_size` (the configured cap). 
The PowerSync Service logs a warning when less than 50% of the WAL budget is remaining during a snapshot. diff --git a/maintenance-ops/self-hosting/diagnostics.mdx b/maintenance-ops/self-hosting/diagnostics.mdx index 06ff6b32..d251fa9e 100644 --- a/maintenance-ops/self-hosting/diagnostics.mdx +++ b/maintenance-ops/self-hosting/diagnostics.mdx @@ -49,4 +49,4 @@ The response `data` object contains information about: **`deploying_sync_rules`** — only present while a new sync config is being deployed. PowerSync runs the new snapshot in parallel so clients continue to be served by the existing active config. Once the snapshot completes, this section disappears and `active_sync_rules` updates. Errors during deployment (snapshot failures, configuration problems) surface here rather than in `active_sync_rules`. -For Postgres sources on version 13 or later, each connection entry in `active_sync_rules` also includes `wal_status`, `safe_wal_size`, and `max_slot_wal_keep_size`. These fields show how much WAL budget remains before the replication slot could be invalidated, which is particularly useful to monitor when deploying a new sync config against a large database. See [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues) for details on slot invalidation and how to resolve it. +For Postgres sources on version 13 or later, each connection entry in `active_sync_rules` also includes `wal_status`, `safe_wal_size`, and `max_slot_wal_keep_size`. These fields show how much WAL budget remains before the replication slot could be invalidated, which is particularly useful to monitor when deploying a new sync config against a large database. When the WAL budget drops below 50%, a warning appears in the sync rules errors array. If the slot is fully invalidated, the error is reported via `last_fatal_error` with code [`PSYNC_S1146`](/debugging/error-codes#psync_s11xx-postgres-replication-issues).
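The slot-health signals surfaced by the Diagnostics API come straight from Postgres, so they can also be checked on the source database itself (Postgres 13 and later). The query below is a generic Postgres sketch, not a PowerSync-specific API:

```sql
-- Slot health directly from the source database (Postgres 13+).
-- wal_status is one of 'reserved', 'extended', 'unreserved', or 'lost';
-- safe_wal_size is NULL for lost slots and when max_slot_wal_keep_size is -1.
SELECT slot_name,
       wal_status,
       safe_wal_size,
       pg_size_pretty(safe_wal_size) AS wal_budget_remaining
FROM pg_replication_slots;
```

A `wal_status` of `unreserved` means the slot is at risk of invalidation at the next checkpoint; `lost` corresponds to the invalidated state that triggers `PSYNC_S1146`.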