KiT-RT is an open-source, multi-fidelity C++ PDE solver for radiative transport in fusion, fission, and radiation therapy simulations. Designed from the ground up for AI-accelerated scientific computing, it supports hybrid neural–PDE workflows, enabling high-resolution reference solutions, scalable data generation for scientific foundation models, and outer-loop workflows such as uncertainty quantification and design optimization.
- Modular, HPC-ready architecture. Supports hybrid MPI/OpenMP distributed parallelism.
- Containerized for portable deployment across HPC systems (Docker & Singularity).
- Python-wrapped via charm_kit.
- Downstream applications:
  - Data generation for scientific foundation models.
  - High-resolution reference solutions for AI-based surrogate modeling.
  - Plug-and-play integration for outer-loop workflows (UQ, optimization).
KiT-RT has been adopted by:
- Sandia National Labs
- Lawrence Livermore National Labs
- Oak Ridge National Labs
- Karlsruhe Institute of Technology
- TU Delft
- NMBU Norway
- Chung-Ang University, Korea
- RIKEN Japan
Applications include:
- Foundation model training for fusion simulation.
- Large-scale UQ experiments.
- Radiation therapy planning.
- Accelerated neural network inference.
- Reference dataset generation for ML-based surrogate models.
- KiT-RT: An Extendable Framework for Radiative Transfer and Therapy — ACM Transactions on Mathematical Software [DOI]
- Reference solutions for linear radiation transport: the Hohlraum and Lattice benchmarks — arXiv:2505.17284 [PDF]
- Structure-Preserving Operator Learning — arXiv:2402.16613 [PDF]
- Structure-preserving neural networks for entropy-based closure — Journal of Computational Physics [DOI], ICML [Link], AIAA [DOI]
- PhD Thesis: Synergies between Numerical Methods for Kinetic Equations and Neural Networks
One-time setup:
git clone https://github.com/KiT-RT/kitrt_code.git
cd kitrt_code
git submodule update --init --recursive
Then run all commands from the repository root.
For the native OpenMP build:
mkdir -p build_omp
cd build_omp
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=OFF -DBUILD_CUDA_HPC=OFF -DBUILD_ML=OFF ..
make -j
cd ..
./build_omp/KiT-RT tests/input/validation_tests/SN_solver/checkerboard_SN.cfg

To build and run inside the provided Docker image instead, start a container shell from the repository root:
docker run --rm -it -v $(pwd):/mnt -w /mnt kitrt/test:latest /bin/bash
mkdir -p build_docker_omp
cd build_docker_omp
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=OFF -DBUILD_CUDA_HPC=OFF -DBUILD_ML=OFF ..
make -j
cd ..
./build_docker_omp/KiT-RT tests/input/validation_tests/SN_solver/checkerboard_SN.cfg

To build with Singularity instead, first build the container image:
cd tools/singularity
sudo singularity build kit_rt.sif kit_rt.def
cd ../..
mkdir -p build_singularity_omp
cd build_singularity_omp
singularity exec ../tools/singularity/kit_rt.sif \
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=OFF -DBUILD_CUDA_HPC=OFF -DBUILD_ML=OFF ..
singularity exec ../tools/singularity/kit_rt.sif make -j
cd ..
singularity exec tools/singularity/kit_rt.sif \
./build_singularity_omp/KiT-RT tests/input/validation_tests/SN_solver/checkerboard_SN.cfg

For the MPI-parallel build:
mkdir -p build_mpi
cd build_mpi
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=ON -DBUILD_CUDA_HPC=OFF -DBUILD_ML=OFF ..
make -j
cd ..
mpirun -np 4 ./build_mpi/KiT-RT tests/input/validation_tests/SN_solver_hpc/lattice_hpc_200_cpu_order2.cfg

To run the MPI build inside Singularity, first build the MPI container image:
cd tools/singularity
sudo singularity build kit_rt_MPI.sif kit_rt_MPI.def
cd ../..
mkdir -p build_singularity_mpi
cd build_singularity_mpi
singularity exec ../tools/singularity/kit_rt_MPI.sif \
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=ON -DBUILD_CUDA_HPC=OFF -DBUILD_ML=OFF ..
singularity exec ../tools/singularity/kit_rt_MPI.sif make -j
cd ..
singularity exec tools/singularity/kit_rt_MPI.sif \
mpirun -np 4 ./build_singularity_mpi/KiT-RT tests/input/validation_tests/SN_solver_hpc/lattice_hpc_200_cpu_order2.cfg

For the CUDA-accelerated build inside Singularity, first build the CUDA container image:
cd tools/singularity
sudo singularity build kit_rt_MPI_cuda.sif kit_rt_MPI_cuda.def
cd ../..
mkdir -p build_singularity_cuda
cd build_singularity_cuda
singularity exec --nv ../tools/singularity/kit_rt_MPI_cuda.sif \
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=OFF -DBUILD_CUDA_HPC=ON -DBUILD_ML=OFF ..
singularity exec --nv ../tools/singularity/kit_rt_MPI_cuda.sif make -j
cd ..
singularity exec --nv tools/singularity/kit_rt_MPI_cuda.sif \
./build_singularity_cuda/KiT-RT tests/input/validation_tests/SN_solver_hpc/lattice_hpc_200_cuda_order2.cfg

When compiled with -DBUILD_CUDA_HPC=ON, HPC runs use the CUDA backend if a GPU is visible, and fall back to CPU if no GPU is detected.
For the neural-network (ML) build, first install the TensorFlow C library:
FILENAME=libtensorflow-cpu-linux-x86_64-2.7.0.tar.gz
wget -q --no-check-certificate https://storage.googleapis.com/tensorflow/libtensorflow/${FILENAME}
sudo tar -C /usr/local -xzf ${FILENAME}
sudo ldconfig /usr/local/lib
rm ${FILENAME}
mkdir -p build_ml
cd build_ml
cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_MPI=OFF -DBUILD_CUDA_HPC=OFF -DBUILD_ML=ON ..
make -j
cd ..
./build_ml/KiT-RT tests/input/validation_tests/MN_solver/checkerboard_MN_neural.cfg

For a debug build:
mkdir -p build_debug
cd build_debug
cmake -DCMAKE_BUILD_TYPE=Debug -DBUILD_MPI=OFF -DBUILD_CUDA_HPC=OFF -DBUILD_ML=OFF ..
make -j
cd ..
./build_debug/KiT-RT tests/input/validation_tests/SN_solver/checkerboard_SN.cfg

For unit tests and code coverage:
mkdir -p build_coverage
cd build_coverage
cmake -DCMAKE_BUILD_TYPE=Debug -DBUILD_TESTING=ON -DBUILD_CODE_COV=ON -DBUILD_UNITY=OFF ..
make -j
./unit_tests
ctest --output-on-failure
gcovr -r .. --html-details coverage.html

The Python interface is provided via charm_kit, allowing seamless integration into AI and outer-loop (UQ, optimization) workflows. See the corresponding README for further information.
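As an illustration of such an outer-loop coupling, the following is a minimal sketch that batch-runs the compiled solver over a set of configuration files directly from Python, e.g. for reference-dataset generation or UQ-style sweeps. It deliberately does not use the charm_kit API; the binary and config paths are taken from the build examples above, and the helper name `run_case` is hypothetical.

```python
# Minimal sketch (not the charm_kit API): batch-run KiT-RT over config files
# from Python, e.g. to generate reference data or drive a UQ sweep.
# Assumes the OpenMP build from above at ./build_omp/KiT-RT.
import subprocess
from pathlib import Path

SOLVER = Path("build_omp/KiT-RT")  # compiled solver binary (see build steps above)
CONFIGS = [
    # validation config shipped with the repository; add further cases here
    "tests/input/validation_tests/SN_solver/checkerboard_SN.cfg",
]

def run_case(cfg: str) -> int:
    """Run a single solver case and return its exit code (hypothetical helper)."""
    return subprocess.run([str(SOLVER), cfg], check=False).returncode

if __name__ == "__main__":
    for cfg in CONFIGS:
        code = run_case(cfg)
        status = "ok" if code == 0 else f"exit code {code}"
        print(f"{cfg}: {status}")
```

For the actual charm_kit bindings and any built-in outer-loop utilities, refer to the charm_kit README mentioned above.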
Performance benchmarks and scaling plots can be found [here].
KiT-RT is released under the MIT License. See LICENSE for details.
Cite this software using:
@article{kitrt2023,
title={KiT-RT: An extendable framework for radiative transfer and therapy},
author={Kusch, Jonas and Schotth{\"o}fer, Steffen and Stammer, Pia and Wolters, Jannick and Xiao, Tianbai},
journal={ACM Transactions on Mathematical Software},
volume={49},
number={4},
pages={1--24},
year={2023},
publisher={ACM New York, NY}
}