DiffusionGS: Baking Gaussian Splatting into Diffusion Denoiser for Fast and Scalable Single-stage Image-to-3D Generation and Reconstruction

1 Johns Hopkins University   2 Adobe   3 HKUST   4 Shanghai Jiao Tong University  

A 3DGS-based diffusion model that generates objects and reconstructs scenes from a single view in about 6 seconds.


Visual results of our method. For objects, the prompt views are shown in the left dashed box; the generated novel views and Gaussian point clouds are on the right. For scenes, our model handles hard cases with occlusion and rotation, as shown in the dashed boxes of the third row. The text-to-3D demos use prompt images generated by Stable Diffusion (objects) and Sora (scenes).

Method Overview

Existing feedforward image-to-3D methods mainly rely on 2D multi-view diffusion models that cannot guarantee 3D consistency. These methods easily collapse when the prompt view direction changes, and they mainly handle object-centric cases. In this paper, we propose a novel single-stage 3D diffusion model, DiffusionGS, for object generation and scene reconstruction from a single view. DiffusionGS directly outputs 3D Gaussian point clouds at each timestep, which enforces view consistency and allows the model to generate robustly from prompt views of any direction, beyond object-centric inputs. In addition, to improve the capability and generalization ability of DiffusionGS, we scale up the 3D training data by developing a scene-object mixed training strategy. Experiments show that DiffusionGS improves PSNR by 2.20 dB and FID by 23.25 on objects, and PSNR by 1.34 dB and FID by 19.16 on scenes, over state-of-the-art methods, without using a 2D diffusion prior or depth estimator. Moreover, our method enjoys over 5x faster inference speed (~6 seconds on a single A100 GPU). Code will be made publicly available.
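To make the single-stage idea concrete, below is a minimal PyTorch sketch of one denoising step in which the network regresses pixel-aligned Gaussian parameters instead of clean pixels, and a splatting renderer maps them back to the input camera poses. All names here (`GaussianDenoiser`, `denoise_step`, `render_fn`) and the 14-channel Gaussian parameterization are our own illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class GaussianDenoiser(nn.Module):
    """Illustrative denoiser: noisy views in, pixel-aligned 3D Gaussians out."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # 14 channels per pixel-aligned Gaussian: center offset (3),
        # scale (3), rotation quaternion (4), opacity (1), RGB color (3)
        # -- a common 3DGS parameterization, assumed here for illustration.
        self.backbone = nn.Conv2d(3, feat_dim, 3, padding=1)  # stand-in for the real backbone
        self.head = nn.Conv2d(feat_dim, 14, 1)

    def forward(self, noisy_views: torch.Tensor) -> torch.Tensor:
        # noisy_views: (B, V, 3, H, W) -> Gaussian params: (B, V, 14, H, W)
        b, v, c, h, w = noisy_views.shape
        feats = torch.relu(self.backbone(noisy_views.flatten(0, 1)))
        return self.head(feats).view(b, v, 14, h, w)

def denoise_step(model, noisy_views, render_fn):
    """One timestep: regress Gaussians, then splat them back to the input poses.

    The rendered images act as the clean-sample (x0) prediction, so every
    timestep is constrained to be 3D-consistent by construction. `render_fn`
    is a placeholder for a differentiable Gaussian splatting renderer.
    """
    gaussians = model(noisy_views)
    rendered = render_fn(gaussians)  # same shape as noisy_views
    return gaussians, rendered
```

In a full sampler, the rendered views would be re-noised to the next timestep, as in a standard x0-parameterized diffusion loop; that bookkeeping is omitted here.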

The Overall Framework of Our DiffusionGS Pipeline. (a) When selecting data for our scene-object mixed training, we impose two angle constraints on the positions and orientations of the viewpoint vectors to guarantee the convergence of training. (b) The denoiser of DiffusionGS at a single timestep, which directly outputs pixel-aligned 3D Gaussian point clouds.
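Part (a) of the framework amounts to a geometric filter on candidate view pairs. The sketch below shows one plausible form of such a check, assuming the two constraints bound the angle between camera-position vectors and the angle between viewing directions; the function names and 60-degree thresholds are hypothetical placeholders, not the paper's settings.

```python
import numpy as np

def angle_deg(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in degrees between two 3D vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def is_valid_view_pair(cam_pos_a, cam_pos_b, view_dir_a, view_dir_b,
                       max_pos_angle=60.0, max_dir_angle=60.0):
    """Hypothetical data-selection filter for scene-object mixed training.

    Keeps a (prompt, target) view pair only if both (1) the angle between
    the two camera-position vectors and (2) the angle between the two
    viewing directions stay below thresholds, so target views do not drift
    too far from the prompt view. Threshold values are placeholders.
    """
    return (angle_deg(cam_pos_a, cam_pos_b) <= max_pos_angle and
            angle_deg(view_dir_a, view_dir_b) <= max_dir_angle)
```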

Single-view 3D Object Generation Results

ABO Hard Cases

GSO Hard Cases

Open Illumination (Real Camera)

Text-to-Image (Prompted by Stable Diffusion)

Text-to-Image (Prompted by FLUX)

Single-view 3D Scene Reconstruction Results

(The first frame is the prompt view; the other frames are rendered by our DiffusionGS.)

Indoor Scene Reconstruction

Outdoor Scene Reconstruction

Text-to-Image (The first frame is prompted by Sora)
