19 changes: 17 additions & 2 deletions README.md
@@ -14,7 +14,7 @@ It is also an official implementation of the following papers (sorted by the tim
- **TeFlow: Enabling Multi-frame Supervision for Self-Supervised Feed-forward Scene Flow Estimation**
*Qingwen Zhang, Chenhan Jiang, Xiaomeng Zhu, Yunqi Miao, Yushan Zhang, Olov Andersson, Patric Jensfelt*
Conference on Computer Vision and Pattern Recognition (**CVPR**) 2026
[ Strategy ] [ Self-Supervised ] - [ [arXiv](https://arxiv.org/abs/2602.19053) ] [ [Project]() ] → [here](#teflow)

- **DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method**
*Qingwen Zhang, Xiaomeng Zhu, Yushan Zhang, Yixi Cai, Olov Andersson, Patric Jensfelt*
@@ -149,7 +149,9 @@ Train DeltaFlow with the leaderboard submit config. [Runtime: Around 18 hours in

```bash
# Total batch size is 10x2=20 under this training setup (10 GPUs, batch_size=2 each).
python train.py model=deltaflow optimizer.lr=2e-3 epochs=20 batch_size=2 num_frames=5 \
    loss_fn=deflowLoss train_aug=True "voxel_size=[0.15, 0.15, 0.15]" "point_cloud_range=[-38.4, -38.4, -3, 38.4, 38.4, 3]" \
    +optimizer.scheduler.name=WarmupCosLR +optimizer.scheduler.max_lr=2e-3 +optimizer.scheduler.warmup_epochs=2

# The pretrained weight for av2 can be downloaded as below; weights for all other datasets are in the same folder.
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deltaflow/deltaflow-av2.ckpt
@@ -206,6 +208,19 @@ To train feed-forward SSL methods (e.g. SeFlow/SeFlow++/VoteFlow), we need to:
1) run the auto-labeling process for training. Check [dataprocess/README.md#self-supervised-process](dataprocess/README.md#self-supervised-process) for more details. We already provide these labels inside the demo dataset.
2) specify the loss function; the config below is set for our best model on the leaderboard.

#### TeFlow

```bash
# [Runtime: Around ? hours in 10x GPUs.]
python train.py model=deltaflow epochs=15 batch_size=2 num_frames=5 train_aug=True \
loss_fn=teflowLoss "voxel_size=[0.15, 0.15, 0.15]" "point_cloud_range=[-38.4, -38.4, -3, 38.4, 38.4, 3]" \
+ssl_label=seflow_auto "+add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" \
optimizer.name=Adam optimizer.lr=2e-3 +optimizer.scheduler.name=StepLR +optimizer.scheduler.step_size=9 +optimizer.scheduler.gamma=0.5

# Pretrained weight can be downloaded through:
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/teflow/teflow-av2.ckpt
```
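The `+add_seloss` dictionary above weights several self-supervised loss terms into a single objective. A minimal sketch of how such a weighted combination works (dummy scalar values stand in for the real per-term loss tensors; `combine_losses` is an illustrative name, not the repo's API):

```python
# Illustrative only: weighted sum of named self-supervised loss terms,
# mirroring the `add_seloss` config. Real losses are tensors, not floats.
def combine_losses(loss_terms, weights):
    # Terms absent from the weight dict contribute nothing.
    return sum(weights.get(name, 0.0) * value for name, value in loss_terms.items())

weights = {"chamfer_dis": 1.0, "static_flow_loss": 1.0,
           "dynamic_chamfer_dis": 1.0, "cluster_based_pc0pc1": 1.0}
# Dummy per-term values standing in for computed losses:
loss_terms = {"chamfer_dis": 0.5, "static_flow_loss": 0.25,
              "dynamic_chamfer_dis": 0.125, "cluster_based_pc0pc1": 0.125}
print(combine_losses(loss_terms, weights))  # 1.0
```

Changing a weight to 0.0 in the config dict disables that term without touching the loss code.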

#### SeFlow

```bash
31 changes: 28 additions & 3 deletions assets/README.md
@@ -51,7 +51,30 @@ Then follow [this stackoverflow answer](https://stackoverflow.com/questions/596
```bash
cd OpenSceneFlow && docker build -f Dockerfile -t zhangkin/opensf .
```


### To Apptainer container

If you want to build a **minimal** training env as an Apptainer container, use the following command:
```bash
apptainer build opensf.sif assets/opensf.def
# zhangkin/opensf:full is created by Dockerfile
```

Then run as a Python env with:
```bash
PYTHON="apptainer run --nv --writable-tmpfs opensf.sif"
$PYTHON train.py
```


<!--
In case the compiled packages do not work for your CUDA capability, add the following code to the `assets/opensf.def` file before `exec`:
```bash
echo "Running pip install for local CUDA modules..."
/opt/conda/bin/pip install /workspace/assets/cuda/chamfer3D
/opt/conda/bin/pip install /workspace/assets/cuda/mmcv
``` -->


## Installation

We will use conda to manage the environment with mamba for faster package installation.
@@ -77,10 +100,11 @@ Checking important packages in our environment now:
```bash
mamba activate opensf
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available()); print(torch.version.cuda)"
python -c "import lightning.pytorch as pl; print('pl version:', pl.__version__)"
python -c "import spconv.pytorch as spconv; print('spconv import successfully')"
python -c "from assets.cuda.mmcv import Voxelization, DynamicScatter;print('successfully import on our lite mmcv package')"
python -c "from assets.cuda.chamfer3D import nnChamferDis;print('successfully import on our chamfer3D package')"
python -c "from av2.utils.io import read_feather; print('av2 package ok')"
```
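The per-package one-liners above can also be rolled into one importability check. A small sketch (a hypothetical helper, not part of the repo) that reports which required imports resolve in the current env without actually importing them:

```python
# Sketch: consolidated env check. find_spec tests importability without
# triggering package import side effects.
from importlib.util import find_spec

REQUIRED = ["torch", "lightning.pytorch", "spconv.pytorch", "av2"]

def check_env(packages=REQUIRED):
    """Return {package: True/False} for importability."""
    status = {}
    for name in packages:
        try:
            status[name] = find_spec(name) is not None
        except ModuleNotFoundError:  # parent package missing for dotted names
            status[name] = False
    return status

for pkg, ok in check_env().items():
    print(f"{pkg}: {'ok' if ok else 'MISSING'}")
```

This does not verify versions or CUDA availability; keep the explicit `torch.cuda.is_available()` check above for that.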


@@ -98,6 +122,7 @@ python -c "from av2.utils.io import read_feather; print('av2 package ok')"
2. On a cluster, the error `pandas ImportError: /lib64/libstdc++.so.6: version 'GLIBCXX_3.4.29' not found` appears.
   Solved by `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/proj/berzelius-2023-154/users/x_qinzh/mambaforge/lib`

3. The nvidia channel cannot be put into the env.yaml file; otherwise the cuda-toolkit always resolves to the latest version (as of 2025-04-30 this cost me an hour, with `nvcc -V` reporting 12.8). Use py=3.10 for cuda >=12.1 and py<3.10 for cuda <=11.8.0; otherwise 10x/20x series GPUs fail in the cuda compiler (half precision).

4. torch_scatter problem: `OSError: /home/kin/mambaforge/envs/opensf-v2/lib/python3.10/site-packages/torch_scatter/_version_cpu.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE`
   Solved by installing the torch-cuda version: `pip install https://data.pyg.org/whl/torch-2.0.0%2Bcu118/torch_scatter-2.1.2%2Bpt20cu118-cp310-cp310-linux_x86_64.whl`
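The torch_scatter wheel filename above follows a predictable pattern, so the matching URL for another torch/CUDA/python combo can be constructed mechanically. A sketch (hypothetical helper; assumes the data.pyg.org naming convention holds for the combo you need):

```python
# Hypothetical helper: build the torch_scatter wheel URL for a given
# torch / CUDA / python combo, following the data.pyg.org naming scheme.
def scatter_wheel_url(torch_ver="2.0.0", cuda="cu118",
                      scatter_ver="2.1.2", py="cp310"):
    # "pt20" = torch major+minor digits, e.g. 2.0.x -> pt20
    pt_tag = "pt" + "".join(torch_ver.split(".")[:2])
    return (f"https://data.pyg.org/whl/torch-{torch_ver}%2B{cuda}/"
            f"torch_scatter-{scatter_ver}%2B{pt_tag}{cuda}-{py}-{py}-linux_x86_64.whl")

print(scatter_wheel_url())  # reproduces the exact wheel URL used in the fix above
```

Always confirm the wheel actually exists on data.pyg.org before installing; not every combination is published.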