2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -11,6 +11,7 @@ All notable changes to this project will be documented in this file.
- Use `--file-log-rotation-period` (or `FILE_LOG_ROTATION_PERIOD`) to configure the frequency of rotation.
- Use `--console-log-format` (or `CONSOLE_LOG_FORMAT`) to set the format to `plain` (default) or `json`.
- Add built-in Prometheus support and expose metrics on `/metrics` path of `native-metrics` port ([#955]).
- BREAKING: Add listener support ([#957]).

### Changed

@@ -47,6 +48,7 @@ All notable changes to this project will be documented in this file.
[#946]: https://github.com/stackabletech/zookeeper-operator/pull/946
[#950]: https://github.com/stackabletech/zookeeper-operator/pull/950
[#955]: https://github.com/stackabletech/zookeeper-operator/pull/955
[#957]: https://github.com/stackabletech/zookeeper-operator/pull/957

## [25.3.0] - 2025-03-21

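The `BREAKING: Add listener support ([#957])` entry corresponds to moving the listener class from `spec.clusterConfig` to the server role, as the CRD and docs changes below show. A minimal before/after sketch of an affected manifest, assuming an illustrative cluster name of `simple-zk` (not taken from this PR):

```yaml
# Sketch only: the field paths are taken from this diff, the resource name is illustrative.
# Before this change:
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  clusterConfig:
    listenerClass: external-unstable
---
# After this change:
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  servers:
    roleConfig:
      listenerClass: external-unstable
```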
20 changes: 5 additions & 15 deletions deploy/helm/zookeeper-operator/crds/crds.yaml
@@ -28,7 +28,6 @@ spec:
clusterConfig:
default:
authentication: []
listenerClass: cluster-internal
tls:
quorumSecretClass: tls
serverSecretClass: tls
@@ -53,20 +52,6 @@ spec:
- authenticationClass
type: object
type: array
listenerClass:
default: cluster-internal
description: |-
This field controls which type of Service the Operator creates for this ZookeeperCluster:

* cluster-internal: Use a ClusterIP service

* external-unstable: Use a NodePort service

This is a temporary solution with the goal to keep yaml manifests forward compatible. In the future, this setting will control which [ListenerClass](https://docs.stackable.tech/home/nightly/listener-operator/listenerclass.html) will be used to expose the service, and ListenerClass names will stay the same, allowing for a non-breaking change.
enum:
- cluster-internal
- external-unstable
type: string
tls:
default:
quorumSecretClass: tls
@@ -444,11 +429,16 @@ spec:
x-kubernetes-preserve-unknown-fields: true
roleConfig:
default:
listenerClass: cluster-internal
podDisruptionBudget:
enabled: true
maxUnavailable: null
description: This is a product-agnostic RoleConfig, which is sufficient for most of the products.
properties:
listenerClass:
default: cluster-internal
description: This field controls which [ListenerClass](https://docs.stackable.tech/home/nightly/listener-operator/listenerclass.html) is used to expose the ZooKeeper servers.
type: string
podDisruptionBudget:
default:
enabled: true
11 changes: 11 additions & 0 deletions deploy/helm/zookeeper-operator/templates/roles.yaml
@@ -107,6 +107,17 @@ rules:
verbs:
- create
- patch
- apiGroups:
- listeners.stackable.tech
resources:
- listeners
verbs:
- get
- list
- watch
- patch
- create
- delete
- apiGroups:
- {{ include "operator.name" . }}.stackable.tech
resources:
14 changes: 7 additions & 7 deletions docs/modules/zookeeper/pages/usage_guide/listenerclass.adoc
@@ -1,16 +1,16 @@
= Service exposition with ListenerClasses
:description: Configure the ZooKeeper service exposure with listener classes: cluster-internal, external-unstable or external-stable

Apache ZooKeeper offers an API. The Operator deploys a service called `<name>` (where `<name>` is the name of the ZookeeperCluster) through which ZooKeeper can be reached.

This service can have either the `cluster-internal` or `external-unstable` type. `external-stable` is not supported for ZooKeeper at the moment.
Read more about the types in the xref:concepts:service-exposition.adoc[service exposition] documentation at platform level.

This is how the listener class is configured:
The operator deploys a xref:listener-operator:listener.adoc[Listener] for the Server pods.
The listener defaults to only being accessible from within the Kubernetes cluster, but this can be changed by setting `.spec.servers.roleConfig.listenerClass`:

[source,yaml]
----
spec:
clusterConfig:
listenerClass: cluster-internal # <1>
servers:
roleConfig:
listenerClass: external-unstable # <1>
----
<1> The default `cluster-internal` setting.
<1> Specify one of `external-stable`, `external-unstable`, `cluster-internal` (the default setting is `cluster-internal`).
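For context, a fuller manifest using the new field might look as follows. This is a sketch: the `roleGroups` layout follows the usual Stackable role structure, and the image version is an assumption rather than something defined in this PR.

```yaml
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk            # illustrative name
spec:
  image:
    productVersion: 3.9.3    # assumed version, not taken from this PR
  servers:
    roleConfig:
      listenerClass: external-unstable   # expose the servers outside the Kubernetes cluster
    roleGroups:
      default:
        replicas: 3
```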
23 changes: 14 additions & 9 deletions rust/operator-binary/src/config/jvm.rs
@@ -1,13 +1,15 @@
use snafu::{OptionExt, ResultExt, Snafu};
use stackable_operator::{
memory::{BinaryMultiple, MemoryQuantity},
role_utils::{self, GenericRoleConfig, JavaCommonConfig, JvmArgumentOverrides, Role},
role_utils::{self, JavaCommonConfig, JvmArgumentOverrides, Role},
};

use crate::crd::{
JVM_SECURITY_PROPERTIES_FILE, LOG4J_CONFIG_FILE, LOGBACK_CONFIG_FILE, LoggingFramework,
METRICS_PORT, STACKABLE_CONFIG_DIR, STACKABLE_LOG_CONFIG_DIR,
v1alpha1::{ZookeeperCluster, ZookeeperConfig, ZookeeperConfigFragment},
JMX_METRICS_PORT, JVM_SECURITY_PROPERTIES_FILE, LOG4J_CONFIG_FILE, LOGBACK_CONFIG_FILE,
LoggingFramework, STACKABLE_CONFIG_DIR, STACKABLE_LOG_CONFIG_DIR,
v1alpha1::{
ZookeeperCluster, ZookeeperConfig, ZookeeperConfigFragment, ZookeeperServerRoleConfig,
},
};

const JAVA_HEAP_FACTOR: f32 = 0.8;
@@ -29,15 +31,15 @@ pub enum Error {
/// All JVM arguments.
fn construct_jvm_args(
zk: &ZookeeperCluster,
role: &Role<ZookeeperConfigFragment, GenericRoleConfig, JavaCommonConfig>,
role: &Role<ZookeeperConfigFragment, ZookeeperServerRoleConfig, JavaCommonConfig>,
role_group: &str,
) -> Result<Vec<String>, Error> {
let logging_framework = zk.logging_framework();

let jvm_args = vec![
format!("-Djava.security.properties={STACKABLE_CONFIG_DIR}/{JVM_SECURITY_PROPERTIES_FILE}"),
format!(
"-javaagent:/stackable/jmx/jmx_prometheus_javaagent.jar={METRICS_PORT}:/stackable/jmx/server.yaml"
"-javaagent:/stackable/jmx/jmx_prometheus_javaagent.jar={JMX_METRICS_PORT}:/stackable/jmx/server.yaml"
),
match logging_framework {
LoggingFramework::LOG4J => {
@@ -63,7 +65,7 @@ fn construct_jvm_args(
/// [`construct_zk_server_heap_env`]).
pub fn construct_non_heap_jvm_args(
zk: &ZookeeperCluster,
role: &Role<ZookeeperConfigFragment, GenericRoleConfig, JavaCommonConfig>,
role: &Role<ZookeeperConfigFragment, ZookeeperServerRoleConfig, JavaCommonConfig>,
role_group: &str,
) -> Result<String, Error> {
let mut jvm_args = construct_jvm_args(zk, role, role_group)?;
@@ -99,7 +101,10 @@ fn is_heap_jvm_argument(jvm_argument: &str) -> bool {
#[cfg(test)]
mod tests {
use super::*;
use crate::crd::{ZookeeperRole, v1alpha1::ZookeeperConfig};
use crate::crd::{
ZookeeperRole,
v1alpha1::{ZookeeperConfig, ZookeeperServerRoleConfig},
};

#[test]
fn test_construct_jvm_arguments_defaults() {
@@ -182,7 +187,7 @@ mod tests {
) -> (
ZookeeperCluster,
ZookeeperConfig,
Role<ZookeeperConfigFragment, GenericRoleConfig, JavaCommonConfig>,
Role<ZookeeperConfigFragment, ZookeeperServerRoleConfig, JavaCommonConfig>,
String,
) {
let zookeeper: ZookeeperCluster =
107 changes: 58 additions & 49 deletions rust/operator-binary/src/crd/mod.rs
@@ -34,7 +34,11 @@ use stackable_operator::{
};
use strum::{Display, EnumIter, EnumString, IntoEnumIterator};

use crate::crd::affinity::get_affinity;
use crate::{
crd::{affinity::get_affinity, v1alpha1::ZookeeperServerRoleConfig},
discovery::build_role_group_headless_service_name,
listener::role_listener_name,
};

pub mod affinity;
pub mod authentication;
@@ -47,8 +51,16 @@ pub const OPERATOR_NAME: &str = "zookeeper.stackable.tech";
pub const ZOOKEEPER_PROPERTIES_FILE: &str = "zoo.cfg";
pub const JVM_SECURITY_PROPERTIES_FILE: &str = "security.properties";

pub const METRICS_PORT: u16 = 9505;
pub const ZOOKEEPER_SERVER_PORT_NAME: &str = "zk";
pub const ZOOKEEPER_LEADER_PORT_NAME: &str = "zk-leader";
pub const ZOOKEEPER_LEADER_PORT: u16 = 2888;
pub const ZOOKEEPER_ELECTION_PORT_NAME: &str = "zk-election";
pub const ZOOKEEPER_ELECTION_PORT: u16 = 3888;

pub const JMX_METRICS_PORT_NAME: &str = "metrics";
pub const JMX_METRICS_PORT: u16 = 9505;
pub const METRICS_PROVIDER_HTTP_PORT_KEY: &str = "metricsProvider.httpPort";
pub const METRICS_PROVIDER_HTTP_PORT_NAME: &str = "native-metrics";
pub const METRICS_PROVIDER_HTTP_PORT: u16 = 7000;

pub const STACKABLE_DATA_DIR: &str = "/stackable/data";
@@ -74,6 +86,7 @@ pub const MAX_PREPARE_LOG_FILE_SIZE: MemoryQuantity = MemoryQuantity {
pub const DOCKER_IMAGE_BASE_NAME: &str = "zookeeper";

const DEFAULT_SERVER_GRACEFUL_SHUTDOWN_TIMEOUT: Duration = Duration::from_minutes_unchecked(2);
pub const DEFAULT_LISTENER_CLASS: &str = "cluster-internal";

mod built_info {
pub const CARGO_PKG_VERSION: &str = env!("CARGO_PKG_VERSION");
@@ -141,7 +154,19 @@ pub mod versioned {

// no doc - it's in the struct.
#[serde(skip_serializing_if = "Option::is_none")]
pub servers: Option<Role<ZookeeperConfigFragment, GenericRoleConfig, JavaCommonConfig>>,
pub servers:
Option<Role<ZookeeperConfigFragment, ZookeeperServerRoleConfig, JavaCommonConfig>>,
}

#[derive(Clone, Debug, Deserialize, JsonSchema, PartialEq, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ZookeeperServerRoleConfig {
#[serde(flatten)]
pub common: GenericRoleConfig,

/// This field controls which [ListenerClass](DOCS_BASE_URL_PLACEHOLDER/listener-operator/listenerclass.html) is used to expose the ZooKeeper servers.
#[serde(default = "default_listener_class")]
pub listener_class: String,
}

#[derive(Clone, Deserialize, Debug, Eq, JsonSchema, PartialEq, Serialize)]
@@ -166,29 +191,6 @@ pub mod versioned {
skip_serializing_if = "Option::is_none"
)]
pub tls: Option<tls::v1alpha1::ZookeeperTls>,

/// This field controls which type of Service the Operator creates for this ZookeeperCluster:
///
/// * cluster-internal: Use a ClusterIP service
///
/// * external-unstable: Use a NodePort service
///
/// This is a temporary solution with the goal to keep yaml manifests forward compatible.
/// In the future, this setting will control which [ListenerClass](DOCS_BASE_URL_PLACEHOLDER/listener-operator/listenerclass.html)
/// will be used to expose the service, and ListenerClass names will stay the same, allowing for a non-breaking change.
#[serde(default)]
pub listener_class: CurrentlySupportedListenerClasses,
}

// TODO: Temporary solution until listener-operator is finished
#[derive(Clone, Debug, Default, Display, Deserialize, Eq, JsonSchema, PartialEq, Serialize)]
#[serde(rename_all = "PascalCase")]
pub enum CurrentlySupportedListenerClasses {
#[default]
#[serde(rename = "cluster-internal")]
ClusterInternal,
#[serde(rename = "external-unstable")]
ExternalUnstable,
}

#[derive(Clone, Debug, Default, Fragment, JsonSchema, PartialEq)]
@@ -346,7 +348,7 @@ pub enum ZookeeperRole {
/// Used for service discovery.
pub struct ZookeeperPodRef {
pub namespace: String,
pub role_group_service_name: String,
pub role_group_headless_service_name: String,
pub pod_name: String,
pub zookeeper_myid: u16,
}
@@ -356,15 +358,18 @@ fn cluster_config_default() -> v1alpha1::ZookeeperClusterConfig {
authentication: vec![],
vector_aggregator_config_map_name: None,
tls: tls::default_zookeeper_tls(),
listener_class: v1alpha1::CurrentlySupportedListenerClasses::default(),
}
}

impl v1alpha1::CurrentlySupportedListenerClasses {
pub fn k8s_service_type(&self) -> String {
match self {
v1alpha1::CurrentlySupportedListenerClasses::ClusterInternal => "ClusterIP".to_string(),
v1alpha1::CurrentlySupportedListenerClasses::ExternalUnstable => "NodePort".to_string(),
fn default_listener_class() -> String {
DEFAULT_LISTENER_CLASS.to_owned()
}

impl Default for ZookeeperServerRoleConfig {
fn default() -> Self {
Self {
listener_class: default_listener_class(),
common: Default::default(),
}
}
}
@@ -506,11 +511,11 @@ impl HasStatusCondition for v1alpha1::ZookeeperCluster {
}

impl ZookeeperPodRef {
pub fn fqdn(&self, cluster_info: &KubernetesClusterInfo) -> String {
pub fn internal_fqdn(&self, cluster_info: &KubernetesClusterInfo) -> String {
format!(
"{pod_name}.{service_name}.{namespace}.svc.{cluster_domain}",
pod_name = self.pod_name,
service_name = self.role_group_service_name,
service_name = self.role_group_headless_service_name,
namespace = self.namespace,
cluster_domain = cluster_info.cluster_domain
)
@@ -541,16 +546,16 @@ impl v1alpha1::ZookeeperCluster {
}
}

/// The name of the role-level load-balanced Kubernetes `Service`
pub fn server_role_service_name(&self) -> Option<String> {
self.metadata.name.clone()
}

/// The fully-qualified domain name of the role-level load-balanced Kubernetes `Service`
pub fn server_role_service_fqdn(&self, cluster_info: &KubernetesClusterInfo) -> Option<String> {
/// The fully-qualified domain name of the role-level [Listener]
///
/// [Listener]: stackable_operator::crd::listener::v1alpha1::Listener
pub fn server_role_listener_fqdn(
&self,
cluster_info: &KubernetesClusterInfo,
) -> Option<String> {
Some(format!(
"{role_service_name}.{namespace}.svc.{cluster_domain}",
role_service_name = self.server_role_service_name()?,
"{role_listener_name}.{namespace}.svc.{cluster_domain}",
role_listener_name = role_listener_name(self, &ZookeeperRole::Server),
namespace = self.metadata.namespace.as_ref()?,
cluster_domain = cluster_info.cluster_domain
))
@@ -560,8 +565,10 @@ impl v1alpha1::ZookeeperCluster {
pub fn role(
&self,
role_variant: &ZookeeperRole,
) -> Result<&Role<v1alpha1::ZookeeperConfigFragment, GenericRoleConfig, JavaCommonConfig>, Error>
{
) -> Result<
&Role<v1alpha1::ZookeeperConfigFragment, ZookeeperServerRoleConfig, JavaCommonConfig>,
Error,
> {
match role_variant {
ZookeeperRole::Server => self.spec.servers.as_ref(),
}
@@ -602,7 +609,7 @@ impl v1alpha1::ZookeeperCluster {
}
}

pub fn role_config(&self, role: &ZookeeperRole) -> Option<&GenericRoleConfig> {
pub fn role_config(&self, role: &ZookeeperRole) -> Option<&ZookeeperServerRoleConfig> {
match role {
ZookeeperRole::Server => self.spec.servers.as_ref().map(|s| &s.role_config),
}
@@ -634,8 +641,10 @@ impl v1alpha1::ZookeeperCluster {
for i in 0..rolegroup.replicas.unwrap_or(1) {
pod_refs.push(ZookeeperPodRef {
namespace: ns.clone(),
role_group_service_name: rolegroup_ref.object_name(),
pod_name: format!("{}-{}", rolegroup_ref.object_name(), i),
role_group_headless_service_name: build_role_group_headless_service_name(
rolegroup_ref.object_name(),
),
pod_name: format!("{role_group}-{i}", role_group = rolegroup_ref.object_name()),
zookeeper_myid: i + myid_offset,
});
}
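To illustrate the renamed discovery helpers, the hostnames follow the format strings in `internal_fqdn` and `server_role_listener_fqdn`. Assuming a cluster `simple-zk` in namespace `default`, the default cluster domain, a role group object name of `simple-zk-server-default`, a `-headless` service suffix, and a role listener named `simple-zk-server` (the last three are assumptions about `build_role_group_headless_service_name` and `role_listener_name`, which are not shown in this diff), the resulting addresses would look roughly like this:

```yaml
# Hypothetical values derived from the format strings above; the concrete
# service and listener names are assumptions, not taken from this diff.
pod_fqdn: simple-zk-server-default-0.simple-zk-server-default-headless.default.svc.cluster.local
role_listener_fqdn: simple-zk-server.default.svc.cluster.local
```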