This document gives a detailed breakdown of the build processes and options for building Kroxylicious from source.
- Development Guide for Kroxylicious
- Development Guide for Kroxylicious Operator
- Deprecation Policy
- JDK (version 21 and above)
- `mvn` (version 3.8.8 and above) - Apache Maven®
- `docker` or `podman` - Docker or Podman
⚠️ If you are using Podman, please see the notes below.
JDK version 21 or newer, and Apache Maven® are required for building this project.
Kroxylicious targets language level 17, except for the integrationtests module,
which targets 21 to access some newer language features. At production runtime, Java 17 remains supported but is deprecated; use Java 21 or later.
Build the project like this:
```shell
mvn clean verify
```
The running of the tests can be controlled with the following Maven properties:
| property | description |
|---|---|
| `-DskipUTs=true` | skip unit tests |
| `-DskipKTs=true` | skip container image tests |
| `-DskipITs=true` | skip integration tests |
| `-DskipSTs=true` | skip system tests |
| `-DskipDTs=true` | skip documentation tests |
| `-DskipTests=true` | skip all tests |
| `-DfailOnWarnings` | fail on javac warnings (ignored if `-Dquick`, see below) |
| `-Derrorprone.skip=true` | disable ErrorProne static analysis. Setting this property (even to `false`) disables the `errorprone-jdk-compatible` profile, which activates only when the property is undefined (`<name>!errorprone.skip</name>`). To enable ErrorProne, omit this property entirely. |
| `-Pdebug` | enables logging so you can see what the Kafka clients, Proxy and in-VM brokers are up to |
The build behavior can be controlled with the following Maven profiles:
| profile | description |
|---|---|
| `-Pqa` (active by default) | runs quality assurance checks: dependency analysis, code formatting, import sorting, license headers, checkstyle, spotbugs, japicmp API compatibility, and enforcer rules. Use `-P '!qa'` to disable. |
| `-Pci` | CI-specific configuration: validates formatting instead of applying it, runs jacoco code coverage, and switches the license plugin to check mode instead of format mode. |
| `-Pdist` | creates distribution artifacts, including the tarball and container images. Required for building deployable packages. |
| `-Pquick` | fast build mode: skips all tests, QA checks, javadoc, and documentation. Also activates with `-Dquick`. Excludes the integration/system test modules from the reactor. |
| `-Psystemtest` | enables the system test module and skips all other test types. Use with `-Pdist` to run Kubernetes-based system tests. |
| `-P-withAdditionalFilters` | excludes the Kroxylicious-maintained filter implementations from the distribution. Only use with `-Pdist`. |
| `errorprone-jdk-compatible` (auto-activated on JDK 17+) | runs Error Prone static analysis during compilation to detect bug patterns. Adds roughly 15-30% to compilation time. Disable with `-Derrorprone.skip=true` for faster builds (see the property table above for semantics). |
The Kafka environment used by the integration tests can be controlled with these two environment variables:
| env var | default | description |
|---|---|---|
| `TEST_CLUSTER_EXECUTION_MODE` | `IN_VM` | `IN_VM` or `CONTAINER`. If `IN_VM`, Kafka is run in the same virtual machine as the integration test; otherwise containers are used. |
| `TEST_CLUSTER_KRAFT_MODE` | `true` | if `true`, Kafka is run in KRaft mode. |
When the integration tests are run in `CONTAINER` mode, the Kafka/ZooKeeper logs are written to a location specified by
the `container.logs.dir` system property. When run through Maven this defaults to `integrationtests/target/container-logs`.
Pass the `-Dquick` option to skip all tests and non-essential plug-ins and create the output artifact as quickly as possible:
```shell
mvn clean package -Dquick
```
Run the following command to format the source code and organize the imports as per the project's conventions:
```shell
mvn process-sources
```
Build with the `dist` profile to create distribution artefacts (see kroxylicious-app).
The distribution includes the Kroxylicious-maintained Filter implementations (located under kroxylicious-additional-filters).
```shell
mvn clean package -Pdist -Dquick
```
It is possible to omit the Kroxylicious-maintained Filter implementations by disabling the `withAdditionalFilters` profile:
```shell
mvn clean package -Pdist -Dquick -P-withAdditionalFilters
```
Run the following to add missing license headers, e.g. when adding new source files:
```shell
mvn org.commonjava.maven.plugins:directory-maven-plugin:highest-basedir@resolve-rootdir license:format
```
No one likes to argue about code formatting in pull requests. As a project we take the stance that if we can't automate the formatting, we are not going to argue about it either. That said, we don't want a mishmash of conflicting styles! So we attack this from multiple angles:
- Shared code formatter settings. Included in the repo are code formatter settings for Eclipse, IntelliJ and `.editorconfig`.
- The Continuous Integration (CI) job building pull requests will fail if there is formatting which doesn't pass our agreed conventions.
- We apply Checkstyle validation to the project as well. You can find our agreed ruleset in the `etc` folder. We bind checkstyle to the `verify` phase of the build, so `mvn clean verify` will validate the code is acceptable.
- We also employ impsort-maven-plugin to keep import order consistent; it re-orders imports as part of the Maven build.
- We also have formatter-maven-plugin, which applies the project code style rules (driven from the Eclipse code formatter) as part of the Maven build cycle.
See .claude/rules/logging.md for our logging conventions.
Build with the `dist` profile as shown above, then execute this:
```shell
kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh --config ${path_to_kroxylicious_config}
```
Or, to run with your own class path, run this instead:
```shell
KROXYLICIOUS_CLASSPATH="${additional_classpath_entries}" kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh --config ${path_to_kroxylicious_config}
```
For example:
```shell
KROXYLICIOUS_CLASSPATH="/path/to/any.jar:/path/to/libs/dir/*" kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh --config kroxylicious-app/example-proxy-config.yaml
```
Build with the `dist` profile as shown above.
To start in debug mode, listening on port 5005:
```shell
JAVA_ENABLE_DEBUG=true kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh -c kroxylicious-app/example-proxy-config.yaml
```
To suspend until the debugger attaches:
```shell
JAVA_ENABLE_DEBUG=true JAVA_DEBUG_SUSPEND=true kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh -c kroxylicious-app/example-proxy-config.yaml
```
To change the debug port:
```shell
JAVA_ENABLE_DEBUG=true JAVA_DEBUG_PORT=1234 kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh -c kroxylicious-app/example-proxy-config.yaml
```
To change the root logger level:
```shell
KROXYLICIOUS_ROOT_LOG_LEVEL=DEBUG kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/bin/kroxylicious-start.sh -c kroxylicious-app/example-proxy-config.yaml
```
To customise the log4j2 config file, edit:
```shell
vim kroxylicious-app/target/kroxylicious-app-*-bin/kroxylicious-app-*/config/log4j2.yaml
```
Low-level network and frame logging is turned off by default for better performance. If you want to debug, turn logging on in the example-proxy-config.yaml file:
```yaml
logNetwork: true
logFrames: true
```
To build the proxy and operator images, first build the project using:
```shell
mvn -Pdist package
```
Maven will build the container images as `.tar.gz` files.
Once the project is built you should be able to see kroxylicious-operator.img.tar.gz and kroxylicious-proxy.img.tar.gz in the target folder of kroxylicious-kubernetes/kroxylicious-operator and kroxylicious-app directories.
If you want to push the Kroxylicious proxy and operator container images to a specific registry, such as quay.io or docker.io, follow these steps:
NOTE: Container runtime commands are illustrated using the podman CLI. If you are using docker, replace `podman` with `docker`.
First, load the image from the tar.gz file into the podman daemon:
```shell
podman load <kroxylicious-operator.img.tar.gz-or-kroxylicious-proxy.img.tar.gz>
```
You can check the loaded image using:
```shell
podman images
```
Now you can tag the loaded image with the appropriate quay.io registry and username:
```shell
podman tag <loaded-image-name-or-id> quay.io/<your-username>/<repository-name>:<tag>
```
Once you have tagged the image, you can push it to quay.io:
```shell
podman push quay.io/<your-username>/<repository-name>:<tag>
```
Alternatively, to test locally made changes, push the built operator and proxy images into your Minikube:
```shell
minikube image load kroxylicious-kubernetes/kroxylicious-operator/target/kroxylicious-operator.img.tar.gz --alsologtostderr=true 2>&1 | tail -n1
minikube image load kroxylicious-app/target/kroxylicious-proxy.img.tar.gz --alsologtostderr=true 2>&1 | tail -n1
```
⚠️ Some minikube container runtimes may not be able to load a gzipped tar (kubernetes/minikube#21678). If the above commands report a failure like `cache_images.go:265] failed pushing to: minikube`, then run:
```shell
gunzip --to-stdout kroxylicious-kubernetes/kroxylicious-operator/target/kroxylicious-operator.img.tar.gz | minikube image load - --alsologtostderr=true 2>&1 | tail -n1
gunzip --to-stdout kroxylicious-app/target/kroxylicious-proxy.img.tar.gz | minikube image load - --alsologtostderr=true 2>&1 | tail -n1
```
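The decompress-on-the-fly trick works because `gunzip --to-stdout` leaves the `.gz` file in place and emits a plain tar stream, which is what `minikube image load -` expects on stdin. A quick way to convince yourself, using a scratch archive rather than the real image tarballs (the path and file names below are illustrative):

```shell
# Build a scratch .tar.gz standing in for an image tarball.
demo=/tmp/kroxy-gunzip-demo
mkdir -p "$demo"
echo hello > "$demo/f.txt"
tar -czf "$demo/img.tar.gz" -C "$demo" f.txt

# Decompress to stdout and list the tar entries from the stream,
# just as minikube would consume them. Prints: f.txt
gunzip --to-stdout "$demo/img.tar.gz" | tar -tf -
```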
The project requires JDK 21 to build and run the integrationtests module, and the IDEA project is configured to build against an SDK
named `temurin-21`. A suggested way to install this is with sdkman, using `sdk install java 21-tem`.
Run `mvn clean install -DskipTests` to install the project into your local Maven repository (in `~/.m2`). This is necessary because
IDEA fails to synchronise the project if the Kroxylicious Maven plugin isn't available to Maven.
Open the root pom.xml as a project.
Then navigate to File > Project Structure > Project Settings and update the SDK to point at your installed JDK 21 (it should be populated
as a suggestion if you used sdkman to install it).
In the IDEA Maven dialogue click on Generate Sources and Update Folders For All Projects.
Build the entire project by running Build > Build Project and then check that you can run io.kroxylicious.it.FilterIT
If you encounter any further issues with generated sources, you can try running mvn clean install -DskipTests again or running
Generate Sources and Update Folders for the specific module that is having problems.
While Kroxylicious is a Java application, we've had reports of issues running the build natively on Windows, and thus suggest using the Windows Subsystem for Linux (WSL) for development.
- Enable the Windows Subsystem for Linux feature: to enable WSL, you need to enable the Windows Subsystem for Linux feature in the Windows Features dialog.
- Install a Linux distribution from the Microsoft Store. The rest of these instructions assume a distribution (such as Ubuntu) which uses `apt` for package management, but the process should be similar for distributions using other package managers, such as Fedora/`dnf`.
- Launch the Linux distribution and configure the Linux environment: after launching, create a user account and set its password. With these steps, you should now have WSL installed and configured on your Windows system. For further assistance please see the Microsoft documentation.
- Open the WSL window.
- Update the packages using:
```shell
sudo apt update
sudo apt upgrade
```
- Check the Java version by typing:
```shell
java --version
```
Expect output similar to:
```
openjdk 21.0.8 2025-07-15 LTS
OpenJDK Runtime Environment Temurin-21.0.8+9 (build 21.0.8+9-LTS)
OpenJDK 64-Bit Server VM Temurin-21.0.8+9 (build 21.0.8+9-LTS, mixed mode, sharing)
```
- Update if needed, with a command like:
```shell
sudo apt update
sudo apt upgrade
sudo apt install openjdk-21-jre-headless
```
- Ensure git is available:
```shell
git --version
```
Expect a version string similar to:
```
git version 2.37.1
```
- Follow the WSL-git tutorial if needed.
- Checkout Kroxylicious to `${kroxylicious-checkout}`.
- Build & develop following the standard build instructions.
On Linux, it may be necessary to configure the `DOCKER_HOST` environment variable to allow the tests to correctly use test containers:
```shell
DOCKER_HOST=unix://$(podman info --format '{{.Host.RemoteSocket.Path}}')
export DOCKER_HOST
```
There is an incompatibility between the HTTP connection timeout expectations of testcontainers-java and the Podman API. This can result in sporadic test failures when running the integration tests under Podman. It manifests as failed or hanging REST API calls that lead to test failures and test hangs.
It affects Linux and macOS. On Linux, it manifests as HTTP calls failing with a Broken Pipe exception; on macOS, as a `localhost:XXX failed to respond`.
To work around the issue, tune the `service_timeout` so that it is in sympathy with the
expectations of the underlying HttpClient defaults.
Do so by following these instructions.
Start the podman machine as normal, then:
```shell
echo 'mkdir -p /etc/containers/containers.conf.d && printf "[engine]\nservice_timeout=91\n" > /etc/containers/containers.conf.d/service-timeout.conf && systemctl restart podman.socket' | podman machine ssh --username root --
```
As a privileged user:
```shell
mkdir -p /etc/containers/containers.conf.d && printf "[engine]\nservice_timeout=91\n" > /etc/containers/containers.conf.d/service-timeout.conf && systemctl restart podman.socket
```
On Linux, start this command:
```shell
socat - UNIX-CONNECT:$(podman info --format '{{.Host.RemoteSocket.Path}}')
```
On macOS, start this command:
```shell
time socat - UNIX-CONNECT:/var/run/docker.sock
```
Then send this input (including the empty line):
```
GET /version HTTP/1.1
Host: www.example.com

```
You'll see an API response. If the `service_timeout` change is effective, the socat
session will continue for 3 minutes. If socat terminates after about 10 seconds, the workaround
has not been applied effectively.
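If you want to see exactly what the one-liner writes before touching `/etc`, you can generate the same drop-in file in a scratch directory (the `/tmp` path here is illustrative; the real workaround writes under `/etc/containers` inside the podman machine):

```shell
# Recreate the drop-in config in a scratch directory.
dir=/tmp/podman-timeout-demo/containers.conf.d
mkdir -p "$dir"
printf '[engine]\nservice_timeout=91\n' > "$dir/service-timeout.conf"

# Show the resulting file contents:
#   [engine]
#   service_timeout=91
cat "$dir/service-timeout.conf"
```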
The integration test suite enables io_uring unconditionally, which may trigger issues with memory limits. Certain platforms, e.g. Fedora, default to running with `RLIMIT_MEMLOCK` set.
If you see test failures such as:
```
[ERROR] Errors:
[ERROR]   MockServerTest.testClientCanSendAndReceiveRPCToMock:47 » IllegalState failed to create a child event loop
```
or
```
java.lang.IllegalStateException: failed to create a child event loop
...
Caused by: java.lang.RuntimeException: failed to allocate memory for io_uring ring; try raising memlock limit (see getrlimit(RLIMIT_MEMLOCK, ...) or ulimit -l): Cannot allocate memory
```
raise the `RLIMIT_MEMLOCK` (see https://lwn.net/Articles/876288/ for a discussion on the merits or otherwise of the default) by adding entries to /etc/security/limits.conf (see https://access.redhat.com/solutions/61334 for details on the file). The updates will take effect in the next login shell.
Example config entry:
```
* hard memlock unlimited
* soft memlock unlimited
```
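After editing /etc/security/limits.conf and opening a new login shell, you can confirm the limit in effect with the `ulimit` shell builtin:

```shell
# Show the current max locked-memory limit for this shell.
# Prints the size in kbytes, or "unlimited".
ulimit -l
```

If this still prints a small number (e.g. 64) in a fresh login shell, the limits.conf change has not taken effect.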
- minikube (install guide)
- helm (install guide)
- User must have access to a container registry such as quay.io or docker.io. Create a publicly accessible repository within the registry named `kroxylicious`.
- [OPTIONAL] aws cli (install guide): in case an AWS Cloud account is used for KMS.
- `KROXYLICIOUS_OPERATOR_REGISTRY`: URL of the registry where the Kroxylicious operator image is located. Default value: `quay.io`
- `KROXYLICIOUS_OPERATOR_ORG`: name of the organisation in the registry where the Kroxylicious operator is located. Default value: `kroxylicious`
- `KROXYLICIOUS_OPERATOR_IMAGE_NAME`: name of the Kroxylicious operator image to be used. Default value: `operator`
- `ARCHITECTURE`: architecture of the cluster where the test clients are deployed. Default value: `System.getProperty("os.arch")`
- `KROXYLICIOUS_OPERATOR_VERSION`: version of the Kroxylicious operator to be used. Default value: `${project.version}` in the pom file
- `KROXYLICIOUS_OPERATOR_INSTALL_DIR`: directory of the operator install files. Used for operator yaml installation. Default value: `System.getProperty("user.dir") + "/target/kroxylicious-operator-dist/install/"`
- `KROXYLICIOUS_IMAGE`: image location of the Kroxylicious (proxy) image. Defaults to `quay.io/kroxylicious/kroxylicious:${project.version}`
- `KAFKA_VERSION`: Kafka version to be used. Default value: `${kafka.version}` in the pom file
- `STRIMZI_VERSION`: Strimzi version to be used. Default value: `${strimzi.version}` in the pom file
- `STRIMZI_NAMESPACE`: namespace used for the Strimzi cluster operator installation. Useful when Strimzi is previously installed. Default value: `kafka`
- `SKIP_TEARDOWN`: variable for development purposes, to avoid repeatedly deploying and deleting deployments each run. Default value: `false`
- `SYNC_RESOURCES_DELETION`: variable for test diagnosis purposes, to delete the resources synchronously. Default value: `false`
- `CONTAINER_CONFIG_PATH`: directory where the `config.json` file is located. This file contains the pull secrets to be used by the container engine. Default value: `$HOME/.docker/config.json`
- `SKIP_STRIMZI_INSTALL`: skip Strimzi installation. Default value: `false`
- `KAFKA_CLIENT`: client used to produce/consume messages. Default value: `strimzi_test_client`. Currently supported values: `strimzi_test_client`, `kaf`, `kcat`, `python_test_client`
- `TEST_CLIENTS_IMAGE`: Strimzi test client image to be used when running the tests. Useful when running regression tests. Default value: `quay.io/strimzi-test-clients/test-clients:latest-kafka-${kafka.version}`
- `USE_CLOUD_KMS`: set to `true` in case AWS/Azure Cloud is used for the Record Encryption system tests. LocalStack/Lowkey-Vault will be used by default. Default value: `false`
- `AWS_REGION`: region of the AWS Cloud account to be used for KMS management. Default value: `us-east-2`
- `AWS_ACCESS_KEY_ID`: key id of the AWS account with admin permissions to be used for KMS management. Mandatory when `USE_CLOUD_KMS` is `true`. Default value: `test`
- `AWS_SECRET_ACCESS_KEY`: secret access key of the AWS account with admin permissions to be used for KMS management. Mandatory when `USE_CLOUD_KMS` is `true`. Default value: `test`
- `AWS_KROXYLICIOUS_ACCESS_KEY_ID`: key id of the AWS account to be used in the Kroxylicious config map to encrypt/decrypt the messages. Mandatory when `USE_CLOUD_KMS` is `true`. Default value: `test`
- `AWS_KROXYLICIOUS_SECRET_ACCESS_KEY`: secret access key of the AWS account to be used in the Kroxylicious config map to encrypt/decrypt the messages. Mandatory when `USE_CLOUD_KMS` is `true`. Default value: `test`
- `CURL_IMAGE`: curl image to be used in the corresponding arch for metrics tests. Default value: `mirror.gcr.io/curlimages/curl:8.18.0`
First of all, start Minikube:
```shell
minikube start --cpus 4
```
By default, the system tests will pull the Operator from `quay.io/kroxylicious/operator:${project.version}` and the Proxy from `quay.io/kroxylicious/kroxylicious:${project.version}`.
These will be the latest images built by CI. You can change this behaviour by setting the environment variables shown above.
Alternatively, to run system tests against locally made changes, push the built operator and proxy images into your Minikube. Refer to section Building and pushing Kroxylicious Container Images.
Run the system tests like this:
```shell
mvn clean verify -Psystemtest -Pdist -DskipDocs=true
```
For information on updating and rendering the documentation, see the kroxylicious-docs directory README.
There are some helper scripts that can reduce the manual work when producing an asciinema terminal cast. A couple of scripts/programs are used in concert:
- extract-markdown-fencedcodeblocks.sh extracts fenced code blocks from a Markdown document. The list of commands is sent to stdout. The script also understands a non-standard extension to the fenced code-block declaration: `prompt` assignments are treated as a comment that will precede the command specified by the fenced code block. This can be used to provide narration. For example:

````
Lorem ipsum dolor sit amet

```shell { prompt="let's install the starnet client" }
dnf install starnet-client
```
````

- demoizer.sh takes a list of commands and executes each one. It uses expect(1) to simulate a human typing the commands. It is designed to be executed within the asciinema session.
- asciinema-edit is used to quantise the periods of inactivity.
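The extraction idea behind extract-markdown-fencedcodeblocks.sh can be sketched in a couple of lines of awk. This is an illustrative simplification, not the real script (which also handles the `prompt` extension and other details):

```shell
# Print the contents of ```-fenced code blocks from Markdown on stdin.
# Minimal sketch: toggle a flag on each fence line, print lines between.
extract_fenced() {
  awk '/^```/ { inblock = !inblock; next } inblock { print }'
}

# Demo input: one narration line and one fenced command.
# Prints: dnf install starnet-client
printf 'Narration text\n```shell\ndnf install starnet-client\n```\n' | extract_fenced
```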
The whole process looks like this:
```shell
# Extract the commands and narration
./scripts/extract-markdown-fencedcodeblocks.sh < path/to/some/markdown.md > /tmp/cmds
asciinema rec --overwrite --command './scripts/demoizer.sh /tmp/cmds .' demo.cast
# Use quantize to reduce lengthy periods of inactivity resulting from waits for resources to come ready etc.
asciinema-edit quantize --range 5 demo.cast > demo_processed.cast
asciinema upload demo_processed.cast
```
We use GitHub Actions for our build and release workflows. See .github/AUTOMATION_README.md for information about working with the actions.
All CI workflows defined by the project are expected to execute within the context of a fork, apart from the docker workflow. To enable the docker workflow, you need to configure the following repository variables and one repository secret:
- `REGISTRY_SERVER` variable - the server of the container registry service, e.g. `quay.io` or `docker.io`
- `REGISTRY_ORGANISATION` variable - the organisation of the container registry service, e.g. `kroxylicious` or your username
- `PROXY_IMAGE_NAME` variable - the proxy image name, e.g. `kroxylicious`
- `OPERATOR_IMAGE_NAME` variable - the operator image name, e.g. `operator`
- `REGISTRY_USERNAME` variable - your username on the service (or the username of your robot account)
- `REGISTRY_TOKEN` secret - the access token that corresponds to `REGISTRY_USERNAME`
The workflow will push the container image to `${REGISTRY_DESTINATION}`, so ensure that the `${REGISTRY_USERNAME}` user has sufficient write privileges.
The project requires that all commits are signed-off, indicating that you certify the changes with the Developer Certificate of Origin (DCO).
A prepare-commit-msg hook automatically adds the Signed-off-by: trailer to your commits. To set this up:
- Copy the hook template to your local git hooks directory:
```shell
cp scripts/git-hooks/prepare-commit-msg .git/hooks/prepare-commit-msg
chmod +x .git/hooks/prepare-commit-msg
```
- Verify the hook is working:
```shell
git commit -m "test commit"
git log -1 --format=%B
```
You should see `Signed-off-by: Your Name <your.email@example.com>` at the end.
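For illustration, the essence of such a hook is to append the trailer to the commit-message file git passes as its first argument, skipping messages that already carry one. The sketch below is not the script shipped in scripts/git-hooks (which may differ); the name, email and file path are hardcoded purely for the demo:

```shell
# Append a Signed-off-by trailer to a commit-message file if absent.
# Illustrative sketch only - use the hook shipped in scripts/git-hooks.
append_signoff() {
  msg_file="$1"
  trailer="Signed-off-by: $2 <$3>"
  # Skip messages that already carry a sign-off.
  grep -qs "^Signed-off-by:" "$msg_file" || printf '\n%s\n' "$trailer" >> "$msg_file"
}

# Demonstrate against a temporary commit-message file.
msg=/tmp/kroxy-signoff-demo.txt
echo "test commit" > "$msg"
append_signoff "$msg" "Jane Developer" "jane@example.com"
cat "$msg"
```

Running it prints the original message followed by a blank line and `Signed-off-by: Jane Developer <jane@example.com>`; running it a second time leaves the file unchanged.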
If you prefer not to use the hook, you can sign off commits manually:
- Use `git commit -s` for each commit in your pull request
- Or use `git rebase --signoff` _your-branch_ to sign off multiple commits
When AI tools (like Claude Code, GitHub Copilot, etc.) assist with your code changes, add an Assisted-by: trailer to document this:
Format: Assisted-by: <AI-name> <model> <email>
Examples:
```
Assisted-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Assisted-by: Claude Opus 4.6 <noreply@anthropic.com>
```
Placement: After the commit message body, before the Signed-off-by: trailer.
Complete example:
```
feat(filters): add request throttling

Implements configurable rate limiting with per-client quotas.

Assisted-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: Jane Developer <jane@example.com>
```
This practice maintains transparency about AI assistance in our development process while preserving the human developer's accountability for the final code.
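If you prefer tooling over hand-editing, `git interpret-trailers` can append both trailers to a prepared message (the names, model and emails below are illustrative):

```shell
# Append Assisted-by and Signed-off-by trailers to a commit message
# read from stdin; git places them in a trailer block at the end.
printf 'feat(filters): add request throttling\n\nImplements configurable rate limiting.\n' |
  git interpret-trailers \
    --trailer 'Assisted-by: Claude Sonnet 4.5 <noreply@anthropic.com>' \
    --trailer 'Signed-off-by: Jane Developer <jane@example.com>'
```

This can also be wired into the prepare-commit-msg hook, or used with `git commit --trailer` on recent git versions.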
This is the development guide for Kroxylicious operator for Kubernetes.
If you want to iterate quickly on the operator the simplest way is to run it as a process on your host (i.e. not running it within a Kubernetes cluster).
Note: The Integration Tests will only run if your kubectl context is pointing at a cluster. For development, we recommend using minikube, for example:
```shell
minikube start --kubernetes-version=latest
```
You should now be able to run the tests using mvn.
If you want to run the OperatorMain (e.g. from your IDE, maybe for debugging) then you'll need to install the CRD:
```shell
kubectl apply -f kroxylicious-kubernetes/kroxylicious-kubernetes-api/src/main/resources/META-INF/fabric8
```
You should now be able to play around with KafkaProxy CRs; read the "Creating a KafkaProxy" section.
Alternatively you can build the operator and run it within Kubernetes by following the instructions below.
Refer to section Building and pushing Kroxylicious Container Images.
Spin up a minikube cluster:
```shell
minikube start --kubernetes-version=latest
```
Then install the operator:
```shell
kubectl apply -f kroxylicious-kubernetes/kroxylicious-operator/target/packaged/install
```
You can check that worked with something like:
```shell
kubectl logs -n kroxylicious-operator deployment/kroxylicious-operator -c operator
```
Sometimes we want to use the Operator to deploy a different Proxy image than the default. We can control this by setting the `KROXYLICIOUS_IMAGE` environment variable on the operator's container:
```shell
kubectl set env deployment/kroxylicious-operator -nkroxylicious-operator KROXYLICIOUS_IMAGE=${YOUR_IMAGE}
```
Then apply an example:
```shell
kubectl apply -f kroxylicious-kubernetes/kroxylicious-operator/target/packaged/examples/simple/
```
You can check that worked with something like:
```shell
kubectl get proxy simple -n my-proxy -o jsonpath='{.status}'
```
To test things properly you'll need to point your virtual clusters at a running Kafka and also run a Kafka client so the proxy is handling some load.
The Kroxylicious system test suite uses the operator to deploy resources, so tests can be written in Java and executed locally in one's IDE. They do, however, require access to a Kubernetes cluster (usually minikube) and helm, with the appropriate RBAC permissions to install operators and provision resources.
System tests are slow-running, and it is often difficult to diagnose issues just from external observation. To
support developers working with the operator while the tests execute, the system test framework enables remote debug
connections to the Kroxylicious Operator and creates a LoadBalancer service (`debug-kroxylicious-operator`) to expose it.
Running `minikube tunnel` will make that available to the IDE, allowing developers to connect to the operator,
add breakpoints, and step through execution. Note: if we find ourselves doing this regularly, we should look at improving
our unit test coverage and logging to make the diagnosis and avoidance of such issues easier in less accessible
environments.
To help simplify local testing, we also have a simple compose file in compose/kafa-compose.yaml. See the compose/README.md for details about how to use the deployed proxy.
We want to let users know about upcoming changes to APIs and give them sufficient time to adapt. The following policy describes how we'll do that. It will apply until the project reaches its 1.0 release.
When there is an API deprecation, it must be announced in the CHANGELOG of the coming release under a section title "Changes, deprecations and removals".
Deprecated features become eligible for removal in the third minor release made following the release with the deprecation announcement, with the additional condition that at least three months must also have elapsed. For example, a feature deprecated in the 0.5.0 changelog becomes eligible for removal in 0.8.0, provided 0.8.0 ships at least three months after 0.5.0 (the version numbers here are illustrative). When a deprecated feature is removed in a release, the removal should be documented under "Changes, deprecations and removals" in the changelog.
Where technically possible, the production code should emit a warning if it detects the use of a deprecated feature. This will serve to prompt the user to migrate to the new API.