Official Implementation of Nautilus: Locality-aware Autoencoder for Scalable Mesh Generation

Nautilus: Locality-aware Autoencoder for Scalable Mesh Generation

Yuxuan Wang*1, Xuanyu Yi*1, Haohan Weng*2, Qingshan Xu1, Xiaokang Wei3,
Xianghui Yang2, Chunchao Guo2, Long Chen4, Hanwang Zhang1
*Equal contribution
1Nanyang Technological University, 2Tencent Hunyuan,
3The Hong Kong Polytechnic University, 4Hong Kong University of Science and Technology


Preparation

Trained Model

We plan to release the checkpoints upon the acceptance of the paper.

Due to company confidentiality policies, we are unable to release the model trained on the full dataset with the 1024-dimensional Michelangelo encoder. Instead, we provide a version trained with the 256-dimensional Michelangelo encoder on a filtered dataset drawn solely from Objaverse. This model performs moderately worse than the full version.

Installation

Install the packages listed in requirements.txt. The code is tested with CUDA 11.8.

# clone the repository
git clone https://github.com/Yuxuan-W/nautilus.git
cd nautilus
# create a new conda environment
conda create -n nautilus python=3.9 -y
conda activate nautilus
# install pytorch, we use cuda 11.8
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia -y
# install other dependencies
pip install -r requirements.txt
# install torch-scatter and torch-cluster based on your cuda version
pip install torch-scatter torch-cluster -f https://data.pyg.org/whl/torch-2.4.0+cu118.html
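After following the steps above, a quick sanity check can confirm that the key packages installed correctly. The snippet below is a hypothetical helper (not part of the repository); it simply reports which of the required packages are missing from the current environment.

```python
import importlib.util

def check_env(required=("torch", "torchvision", "torch_scatter", "torch_cluster")):
    """Return the subset of `required` packages that cannot be imported."""
    return [name for name in required if importlib.util.find_spec(name) is None]

if __name__ == "__main__":
    missing = check_env()
    print("missing packages:", missing or "none")
```

If anything is reported missing, re-run the corresponding install step (in particular, the torch-scatter / torch-cluster wheel must match your PyTorch and CUDA versions).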

Inference

Generation inference will be available once our checkpoints are released. Generating a 5000-face mesh asset typically takes around 3 to 4 minutes on a single A100 GPU.

bash infer.sh /your/path/to/checkpoint /your/path/to/pointcloud
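The exact point-cloud format expected by infer.sh is not documented here, but conditioning point clouds for mesh generation are commonly obtained by area-weighted sampling of a surface. The sketch below (a hypothetical NumPy helper, names ours, not part of the repository) shows one standard way to turn a triangle mesh into such a point cloud.

```python
import numpy as np

def sample_point_cloud(vertices, faces, n_points=8192, seed=0):
    """Uniformly sample points on a triangle mesh surface.

    Triangles are chosen with probability proportional to their area,
    then a point is drawn uniformly inside each chosen triangle via
    barycentric coordinates.
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]  # (F, 3, 3): the three corners of every face
    # Triangle areas from the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates; reflect samples that fall outside.
    u, v = rng.random((2, n_points))
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tris[idx]
    points = t[:, 0] + u[:, None] * (t[:, 1] - t[:, 0]) + v[:, None] * (t[:, 2] - t[:, 0])
    return points.astype(np.float32)
```

Check the released inference code for the actual number of points and any normalization it expects before passing a point cloud to infer.sh.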

Citation

If you find our work helpful, please cite our paper:

@article{wang2025nautilus,
  title={Nautilus: Locality-aware Autoencoder for Scalable Mesh Generation},
  author={Wang, Yuxuan and Yi, Xuanyu and Weng, Haohan and Xu, Qingshan and Wei, Xiaokang and Yang, Xianghui and Guo, Chunchao and Chen, Long and Zhang, Hanwang},
  journal={arXiv preprint arXiv:2501.14317},
  year={2025}
}

Acknowledgement

Our code is based on the wonderful meshgpt-pytorch repository. We thank the authors for their great work.
