diff --git a/src/current/v23.2/architecture/replication-layer.md b/src/current/v23.2/architecture/replication-layer.md
index 5750dcc4060..5e260b8c3a9 100644
--- a/src/current/v23.2/architecture/replication-layer.md
+++ b/src/current/v23.2/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
 
diff --git a/src/current/v24.1/architecture/replication-layer.md b/src/current/v24.1/architecture/replication-layer.md
index 2544a65ae4c..dcf5a07b624 100644
--- a/src/current/v24.1/architecture/replication-layer.md
+++ b/src/current/v24.1/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
 
diff --git a/src/current/v24.3/architecture/replication-layer.md b/src/current/v24.3/architecture/replication-layer.md
index b1919885659..739878cab80 100644
--- a/src/current/v24.3/architecture/replication-layer.md
+++ b/src/current/v24.3/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
 
diff --git a/src/current/v25.2/architecture/replication-layer.md b/src/current/v25.2/architecture/replication-layer.md
index 6d4594a6337..c62ad2d35be 100644
--- a/src/current/v25.2/architecture/replication-layer.md
+++ b/src/current/v25.2/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
 
diff --git a/src/current/v25.4/architecture/replication-layer.md b/src/current/v25.4/architecture/replication-layer.md
index 96f7a74cbbd..9cda5873a40 100644
--- a/src/current/v25.4/architecture/replication-layer.md
+++ b/src/current/v25.4/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
 
diff --git a/src/current/v26.1/architecture/replication-layer.md b/src/current/v26.1/architecture/replication-layer.md
index 96f7a74cbbd..9cda5873a40 100644
--- a/src/current/v26.1/architecture/replication-layer.md
+++ b/src/current/v26.1/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
 
diff --git a/src/current/v26.2/architecture/replication-layer.md b/src/current/v26.2/architecture/replication-layer.md
index 96f7a74cbbd..9cda5873a40 100644
--- a/src/current/v26.2/architecture/replication-layer.md
+++ b/src/current/v26.2/architecture/replication-layer.md
@@ -19,7 +19,7 @@ Ensuring consistency with nodes offline, though, is a challenge many databases f
 
 The number of failures that can be tolerated is equal to *(Replication factor - 1)/2*. For example, with 3x replication, one failure can be tolerated; with 5x replication, two failures, and so on. You can control the replication factor at the cluster, database, and table level using [replication zones]({% link {{ page.version.version }}/configure-replication-zones.md %}).
 
-When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto it, ensuring your load is evenly distributed.
+When failures happen, though, CockroachDB automatically realizes nodes have stopped responding and works to redistribute your data to continue maximizing survivability. This process also works the other way around: when new nodes join your cluster, data automatically rebalances onto them, ensuring your load is evenly distributed.
 
 ### Interactions with other layers
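The failure-tolerance formula quoted in every hunk above, *(Replication factor - 1)/2*, can be sketched in a few lines of Python (an illustrative note alongside the patch, not part of it; the function name is our own):

```python
# Majority-based replication tolerates floor((n - 1) / 2) simultaneous
# node failures for a replication factor of n, because a majority of
# replicas must remain available.
def tolerated_failures(replication_factor: int) -> int:
    return (replication_factor - 1) // 2  # integer (floor) division

for rf in (3, 5, 7):
    print(f"replication factor {rf}: tolerates {tolerated_failures(rf)} failure(s)")
```

This matches the examples in the changed docs text: 3x replication tolerates one failure, 5x tolerates two.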