Wavelet Latent Diffusion (WaLa): Billion-Parameter 3D Generative Model with Compact Wavelet Encodings

Aditya Sanghi1
Aliasghar Khani2
Pradyumna Reddy1
Arianna Rampini1
Derek Cheung2
Kamal Rahimi Malekshan2
Kanika Madan1
Hooman Shayani1

1 Autodesk AI Lab
2 Autodesk Research
[arxiv] [Code] [Weights] [Demo]

Abstract

Large-scale 3D generative models require substantial computational resources yet often fall short in capturing fine details and complex geometries at high resolutions. We attribute this limitation to the inefficiency of current representations, which lack the compactness required to scale generative models effectively. To address this, we introduce a novel approach called Wavelet Latent Diffusion, or WaLa, that encodes 3D shapes into compact, wavelet-based latent encodings. Specifically, we compress a 256³ signed distance field into a 12³ × 4 latent grid, achieving an impressive 2,427× compression ratio with minimal loss of detail. This high level of compression allows our method to efficiently train large-scale generative networks without increasing inference time. Our models, both conditional and unconditional, contain approximately one billion parameters and successfully generate high-quality 3D shapes at 256³ resolution. Moreover, WaLa offers rapid inference, producing shapes within two to four seconds depending on the condition, despite the model's scale. We demonstrate state-of-the-art performance across multiple datasets, with significant improvements in generation quality, diversity, and computational efficiency. We open-source our code and, to the best of our knowledge, release the largest pretrained 3D generative models across different modalities.
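The 2,427× compression ratio follows directly from the dimensions stated above; a quick sanity check:

```python
# Verify the compression ratio stated in the abstract:
# a 256^3 signed distance field compressed into a 12^3 x 4 latent grid.
sdf_values = 256 ** 3          # 16,777,216 values in the input SDF
latent_values = 12 ** 3 * 4    # 6,912 values in the latent grid
ratio = sdf_values / latent_values
print(f"{ratio:.0f}x")         # prints "2427x", matching the reported figure
```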





Overview of the WaLa network architecture, two-stage training process, and inference method. Top Left: Stage 1 autoencoder training, compressing the Wavelet Tree (W) shape representation into a compact latent space. Right: Conditional/unconditional diffusion training. Bottom: Inference pipeline, illustrating sampling from the trained diffusion model and decoding the sampled latent into a Wavelet Tree (W), then into a mesh.
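The data flow in the figure can be sketched with placeholder arrays. This is a minimal, hypothetical sketch, not the actual implementation: only the latent shape (12, 12, 12, 4) comes from the abstract; the wavelet-tree shape, the function names, and the stand-in "networks" are assumptions made for illustration.

```python
import numpy as np

LATENT_SHAPE = (12, 12, 12, 4)  # stated in the abstract: 12^3 x 4 latent grid

def encode(wavelet_tree: np.ndarray) -> np.ndarray:
    """Stage-1 encoder stand-in: compress the wavelet tree W into the latent grid."""
    return np.zeros(LATENT_SHAPE, dtype=np.float32)

def sample_diffusion(condition=None) -> np.ndarray:
    """Stage-2 stand-in: the trained diffusion model would iteratively denoise
    toward a latent; here we just draw Gaussian noise of the right shape."""
    rng = np.random.default_rng(0)
    return rng.standard_normal(LATENT_SHAPE).astype(np.float32)

def decode(latent: np.ndarray) -> np.ndarray:
    """Stage-1 decoder stand-in: latent -> wavelet tree W. The (46, 46, 46)
    coarse-coefficient shape is hypothetical; the real layout is multi-scale."""
    return np.zeros((46, 46, 46), dtype=np.float32)

# Inference pipeline from the figure: sample a latent, decode it into W,
# which would then be converted into a mesh.
latent = sample_diffusion(condition="a single-view image")
w = decode(latent)
print(latent.shape, w.shape)  # (12, 12, 12, 4) (46, 46, 46)
```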




Single-view Image Conditioned Generation


Click for More Results 🕺 🕺


Voxel Conditioned Generation (16³)

Click for More Results 🕺 🕺

Text Conditioned Generation

The cat is sleeping comfortably on the chair
A bag
A cap hat
A cartoony Santa Claus
A baseball bat with a batting helmet upsidedown
High heels
A poison-dart frog
An owl
An umbrella
A bunch of apples stacked on a plate
A cow
Click for More Results 🕺 🕺


Point Cloud Conditioned Generation

Click for More Results 🕺 🕺

Depth Conditioned Generation


Click for More Results 🕺 🕺


Multi-view Depth (6 Views) Conditioned Generation


Click for More Results(Depth 6 condition) 🕺
Click for More Results(Depth 4 condition) 🕺


Sketch Conditioned Generation

Click for More Results 🕺 🕺


Multi-view Image Conditioned Generation


Click for More Results 🕺 🕺





Paper

Wavelet Latent Diffusion (WaLa): Billion-Parameter 3D Generative Model with Compact Wavelet Encodings
[arxiv] [Code] [Weights] [Demo]


BibTeX

@misc{sanghi2024waveletlatentdiffusionwala,
      title={Wavelet Latent Diffusion (WaLa): Billion-Parameter 3D Generative Model with Compact Wavelet Encodings},
      author={Aditya Sanghi and Aliasghar Khani and Pradyumna Reddy and Arianna Rampini and Derek Cheung and Kamal Rahimi Malekshan and Kanika Madan and Hooman Shayani},
      year={2024},
      eprint={2411.08017},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2411.08017}, 
}