A comprehensive framework for ONNX model generation, optimization, and deployment for Deeploy.
Onnx4Deeploy provides a unified interface for exporting PyTorch models to ONNX format with specialized optimizations for inference and on-device training on Deeploy hardware accelerators.
- Unified model export – single API for both inference and training-mode ONNX graph generation
- 15 model exporters – MLPerf Tiny, ViT-family, Mamba, and simple reference models (see Supported models)
- 38 operator test generators – every Deeploy-supported ONNX op has its own reference test
- Training graph optimization – custom passes (`fuse_mse_loss`, `fuse_global_average_pool_grad`, GEMM conversion, gradient-node cleanup, shape simplification, …) specialized for on-device training
- Type-safe API – full type annotations and docstrings
- Pytest-based test suite (inference + training mode, per model)
- ONNX Runtime validation of every exported graph
- `inputs.npz`/`outputs.npz` layout checks to keep the training harness in sync
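As a sketch of what such a layout check does — the key names (`input`, `output`) and shapes below are illustrative assumptions, not the actual Deeploy harness contract:

```python
import numpy as np

def check_npz_layout(inputs_path, outputs_path, input_key="input", output_key="output"):
    """Verify that inputs.npz / outputs.npz contain the expected keys
    and agree on the batch dimension (axis 0)."""
    inputs = np.load(inputs_path)
    outputs = np.load(outputs_path)
    assert input_key in inputs.files, f"missing '{input_key}' in {inputs_path}"
    assert output_key in outputs.files, f"missing '{output_key}' in {outputs_path}"
    # Batch dimension must agree between inputs and outputs
    assert inputs[input_key].shape[0] == outputs[output_key].shape[0], "batch size mismatch"
    return inputs[input_key].shape, outputs[output_key].shape

# Example: write a matching pair and check it
np.savez("inputs.npz", input=np.zeros((4, 3, 32, 32), dtype=np.float32))
np.savez("outputs.npz", output=np.zeros((4, 10), dtype=np.float32))
print(check_npz_layout("inputs.npz", "outputs.npz"))
```

A check of this kind catches a desynchronized harness early, before the mismatch surfaces as a cryptic shape error on the device.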
This project has strict version pins in requirements.txt because the training-graph generation path is very sensitive to torch/onnxruntime-training interactions. Deviating from these versions is not supported.
| Dependency | Version | Notes |
|---|---|---|
| Python | 3.10 only (`==3.10.*`) | Enforced via `pyproject.toml` |
| `torch` | 2.7.0 | Pinned exactly |
| `onnx` | 1.16.0 | Pinned exactly |
| `onnxruntime-training` | 1.19.2 | Pinned exactly – not the regular `onnxruntime` package |
| `onnx-graphsurgeon` | 0.5.8 | Pinned exactly |
| `onnxscript` | 0.5.7 | Pinned exactly |
| `onnxsim` | 0.4.36 | Pinned exactly |
| `numpy` | 1.26.4 | Pinned exactly |
| `pyyaml` | 6.0.2 | Pinned exactly |
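Since deviating from the pins is unsupported, it can be worth checking the installed versions programmatically. The following is a minimal sketch (the pin values mirror the table above; `check_pins` is a hypothetical helper, not part of the package):

```python
from importlib.metadata import version, PackageNotFoundError

# Pin values copied from the dependency table above
PINS = {
    "torch": "2.7.0",
    "onnx": "1.16.0",
    "onnxruntime-training": "1.19.2",
    "onnx-graphsurgeon": "0.5.8",
    "onnxscript": "0.5.7",
    "onnxsim": "0.4.36",
    "numpy": "1.26.4",
    "pyyaml": "6.0.2",
}

def check_pins(pins):
    """Return {package: (wanted, found)} for every mismatch; found is None
    when the package is not installed at all."""
    mismatches = {}
    for pkg, want in pins.items():
        try:
            found = version(pkg)
        except PackageNotFoundError:
            found = None
        if found != want:
            mismatches[pkg] = (want, found)
    return mismatches

if __name__ == "__main__":
    bad = check_pins(PINS)
    for pkg, (want, found) in bad.items():
        print(f"{pkg}: want {want}, found {found}")
    if not bad:
        print("All pins satisfied")
```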
onnxruntime-training is only published as a prebuilt wheel for x86_64 Linux (amd64). There is no official aarch64 / Apple Silicon / Windows wheel. On Apple Silicon Macs you need to run the installation and all generation scripts inside a linux/amd64 Docker container (Docker Desktop supports this via QEMU / Rosetta). On other platforms you must build onnxruntime-training from source.
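A minimal sketch of the container workflow on Apple Silicon (the base image is an assumption; any linux/amd64 image with Python 3.10 works):

```shell
# Start an emulated amd64 container with the repository mounted
docker run --platform linux/amd64 -it --rm \
    -v "$(pwd)":/workspace -w /workspace \
    python:3.10-slim bash

# Inside the container, proceed with the normal installation:
#   pip install -r requirements.txt && pip install -e .
```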
```bash
git clone https://github.com/pulp-platform/Onnx4Deeploy.git
cd Onnx4Deeploy

# (Recommended) create a clean Python 3.10 environment
conda create -n onnx4deeploy python=3.10 -y
conda activate onnx4deeploy

# Install pinned runtime dependencies
pip install -r requirements.txt

# Install the package itself
pip install -e .
```

Verify the installation:

```bash
python -c "import onnx4deeploy; print(onnx4deeploy.__version__)"
python -c "from onnxruntime.training import artifacts; print('ORT training OK')"
```

Both commands must succeed. If the second raises `ModuleNotFoundError: No module named 'onnxruntime.training'`, you have the regular `onnxruntime` package installed instead of `onnxruntime-training`: uninstall it and reinstall from `requirements.txt`.
Onnx4Deeploy provides two main features: operator-level generation and model-level export.
Use the unified CLI tool `Onnx4Deeploy.py`:

```bash
# Generate an operator test
python Onnx4Deeploy.py -operator Relu -o ./onnx

# Generate a model inference graph
python Onnx4Deeploy.py -model CCT -mode infer -o ./onnx

# Generate a model training graph
python Onnx4Deeploy.py -model CCT -mode train -o ./onnx

# List available options
python Onnx4Deeploy.py --list-models
python Onnx4Deeploy.py --list-operators
python Onnx4Deeploy.py --examples
```

Available arguments:

- `-operator NAME` – generate an operator test (e.g., `Relu`, `Add`, `Gemm`, `ConvGradXW`)
- `-model NAME` – generate a model ONNX (see Supported models)
- `-mode {infer,train}` – model export mode (default: `infer`)
- `-o PATH` – output directory path
- `--n-epochs`, `--n-steps`, `--n-batches`, `--n-accum`, `--batch-size`, `--dataset`, `--data-path`, `--data-size`, `--lr`, `--classes` – training-mode knobs
- `--list-models`, `--list-operators`, `--examples` – help / discovery
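The training-mode knobs can be combined in a single invocation; the flag values below are illustrative examples, not recommended defaults:

```shell
# Illustrative training export of the SimpleMLP reference model
python Onnx4Deeploy.py -model SimpleMLP -mode train \
    --batch-size 32 --n-epochs 1 --lr 0.01 -o ./onnx
```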
| Category | Model | Inference | Training | Notes |
|---|---|---|---|---|
| MLPerf Tiny | ResNet-8 | ✅ | ✅ | Image classification (CIFAR-10) |
| MLPerf Tiny | MobileNetV2-0.35 | ✅ | ✅ | Visual Wake Words |
| MLPerf Tiny | DS-CNN-XS / DS-CNN-S | ✅ | ✅ | Keyword spotting |
| MLPerf Tiny | Autoencoder-tiny / -MLperf | ✅ | ✅ | Anomaly detection |
| BMI / EEG | EpiDeNet | ✅ | ✅ | Epilepsy detection |
| BMI / EEG | MIBMInet | ✅ | ✅ | Motor-imagery BMI |
| Sleep staging | SleepConViT | ✅ | ✅ | Sleep stage classification |
| Transformer | CCT | ✅ | ✅ | Compact Convolutional Transformer |
| Transformer | TinyTransformer | ✅ | ✅ | Patch-based Transformer (MNIST) |
| Transformer | TinyViT (5M/11M/21M) | ✅ | ✅ | Compact ViT variants |
| Transformer | MobileViT (XXS/XS/S) | ✅ | ✅ | Mobile-friendly hybrid ViT |
| SSM | Mamba | ✅ | ❌ | Selective SSM; training export not yet supported |
| Reference / demo | SimpleMLP | ✅ | ✅ | Minimal MLP |
| Reference / demo | SimpleCNN | ✅ | ✅ | Minimal strided-conv CNN |
| Reference / demo | LightweightCNN | ✅ | ✅ | Compact image classifier |
`--list-models` is the authoritative source; this table is for orientation.
We welcome contributions! Please see our Contributing Guide for details.
```bash
# Install with development dependencies
pip install -e ".[dev]"

# Install pre-commit hooks
pre-commit install

# Run the full test suite (skip slow MNIST training test)
pytest tests/ -m "not slow"

# Format code
black --line-length=100 .
isort --profile=black --line-length=100 .
```

All licenses used in this repository are listed under the LICENSES folder. Unless specified otherwise in the respective file headers, all code checked into this repository is made available under a permissive license.
- Most software sources and tool scripts are licensed under the MIT license.
- Markdown, JSON, text files, pictures, and PDFs are licensed under the Creative Commons Attribution 4.0 International license (CC BY 4.0).

To extract license information for all files, use the `reuse` tool by running `reuse spdx` in the root directory of this repository.
- Built with ONNX
- Tested with ONNX Runtime Training
- Optimized for Deeploy
- Issues: GitHub Issues
- Documentation: docs/
