UniSH: Unifying Scene and Human Reconstruction in a Feed-Forward Pass
Mengfei Li1, Peng Li1, Zheng Zhang2, Jiahao Lu1, Chengfeng Zhao1, Wei Xue1,
Qifeng Liu1, Sida Peng3, Wenxiao Zhang1, Wenhan Luo1, Yuan Liu1†, Yike Guo1†
1HKUST, 2BUPT, 3ZJU
TL;DR
Given a monocular video as input, UniSH jointly reconstructs the scene and humans in a single feed-forward pass, enabling effective estimation of scene geometry, camera parameters, and SMPL parameters.
🛠️ Installation
We provide a sudo-free installation method that works on most Linux servers (including headless ones).
Step 1: Clone Repository
```bash
git clone https://github.com/murphylmf/UniSH.git
cd UniSH
```
Step 2: Create Conda Environment
This installs Python, system compilers, and OpenGL drivers.
```bash
conda env create -f environment.yml
conda activate unish
```
Step 3: Compile Dependencies
This script compiles PyTorch3D, MMCV, and SAM2 from source using the compilers installed in Step 2. The environment has been tested with CUDA 12.1 and CUDA 11.8; you can specify the CUDA version by passing it as an argument to the installation script.
```bash
# Default (auto-detect or 12.1)
bash install.sh

# For CUDA 11.8
bash install.sh 11.8

# For CUDA 12.1
bash install.sh 12.1
```
Step 4: Download SMPL Models
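If you are unsure which argument to pass, the toolkit version can be read from `nvcc`. This is a minimal sketch (not part of the repository's scripts); it falls back to 12.1, the default tested version, when `nvcc` is not on the PATH:

```shell
#!/usr/bin/env bash
# Pick the CUDA version argument for install.sh from the local toolkit.
# Falls back to 12.1 (the default tested version) when nvcc is unavailable.
if command -v nvcc >/dev/null 2>&1; then
  CUDA_VERSION=$(nvcc --version | sed -n 's/.*release \([0-9][0-9]*\.[0-9]*\).*/\1/p')
else
  CUDA_VERSION=12.1
fi
echo "Using CUDA ${CUDA_VERSION}"
# bash install.sh "${CUDA_VERSION}"
```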
Please download the SMPL models and place them in the `body_models` folder.
The directory structure should be organized as follows:
```
UniSH/
├── body_models/
│   └── smpl/
│       └── smpl/
│           ├── SMPL_FEMALE.pkl
│           ├── SMPL_MALE.pkl
│           └── SMPL_NEUTRAL.pkl
```
🚀 Quick Start (Inference)
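Before running inference, you can optionally confirm that the SMPL files from Step 4 are in place. This is a small sketch (not part of the repository), run from the `UniSH/` root:

```shell
# Optional sanity check: verify the three SMPL model files are where the code expects them.
for f in SMPL_FEMALE.pkl SMPL_MALE.pkl SMPL_NEUTRAL.pkl; do
  if [ -f "body_models/smpl/smpl/$f" ]; then
    echo "found: $f"
  else
    echo "MISSING: $f"
  fi
done
```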
Run Inference
Run the following command to reconstruct the scene and human from the video:
```bash
python inference.py --output_dir inference_results/example --video_path examples/example_video.mp4
```
Please refer to `inference.py` for more information about additional parameters.
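To process several videos in one go, the same command can be wrapped in a loop. A minimal sketch, assuming your videos sit in `examples/` and using only the two flags shown above:

```shell
# Batch inference: run inference.py on every .mp4 under examples/,
# writing each result to its own subdirectory of inference_results/.
for video in examples/*.mp4; do
  name=$(basename "$video" .mp4)
  python inference.py --output_dir "inference_results/$name" --video_path "$video"
done
```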
📝 Citation
If you find this code useful for your research, please consider citing our paper:
```bibtex
@misc{li2026unishunifyingscenehuman,
  title={UniSH: Unifying Scene and Human Reconstruction in a Feed-Forward Pass},
  author={Mengfei Li and Peng Li and Zheng Zhang and Jiahao Lu and Chengfeng Zhao and Wei Xue and Qifeng Liu and Sida Peng and Wenxiao Zhang and Wenhan Luo and Yuan Liu and Yike Guo},
  year={2026},
  eprint={2601.01222},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.01222},
}
```
🙏 Acknowledgements
We acknowledge the excellent contributions from the following projects:
📄 License
This project is licensed under the Apache 2.0 License. See LICENSE for details.