Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering

arXiv Preprint

National University of Singapore

Abstract

3D Gaussians have recently emerged as a highly efficient representation for 3D reconstruction and rendering. Despite their high rendering quality and speed at high resolutions, both deteriorate drastically when the scene is rendered at lower resolutions or from faraway camera positions. In those cases, the sampling rate of the image can fall below the Nyquist rate relative to the screen size of each splatted 3D Gaussian, which leads to aliasing artifacts. Rendering is also drastically slowed down by the sequential alpha blending of more splatted Gaussians per pixel. To address these issues, we propose a multi-scale 3D Gaussian splatting algorithm, which maintains Gaussians at different scales to represent the same scene: higher-resolution images are rendered with more small Gaussians, and lower-resolution images are rendered with fewer large Gaussians. With similar training time, our algorithm achieves a 13%-66% PSNR improvement and a 160%-2400% rendering speed improvement at 4x-128x scale rendering on the Mip-NeRF360 dataset compared to single-scale 3D Gaussian splatting.

Speed and Quality Comparison between 3D-GS and Our Method in "Garden" Scene
Speed and Quality Comparison between 3D-GS and Our Method in "Bicycle" Scene

Our algorithm addresses aliasing by using multi-scale Gaussians to represent different levels of detail (LOD) within a scene: larger Gaussians for coarser, low-resolution renderings, and smaller Gaussians for finer, high-resolution details. During the early stages of training, we aggregate small Gaussians from the finer levels to construct larger Gaussians for the coarser levels, as illustrated on the left side of the figure above. In the rendering phase, we select Gaussians based on their screen size (or pixel coverage) at the current resolution, as shown on the right side of the figure; a code sketch of both steps is given below. This multi-scale representation is pivotal in achieving both high visual quality and efficient rendering across a wide range of resolutions.
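To make the aggregation step concrete, the following is a minimal PyTorch sketch of one way to merge fine-level Gaussians into coarser ones. The voxel-based grouping, the opacity-weighted averaging, and the function name aggregate_to_coarser_level are illustrative assumptions rather than the exact procedure from the paper; each Gaussian is assumed to be stored as a position, an axis-aligned scale, and an opacity.

import torch

def aggregate_to_coarser_level(positions, scales, opacities, voxel_size):
    # Illustrative sketch (not the paper's exact rule): group fine-level
    # Gaussians by the coarse voxel they fall into, then merge each group.
    voxel_ids = torch.floor(positions / voxel_size).long()          # (N, 3)
    _, group = torch.unique(voxel_ids, dim=0, return_inverse=True)  # (N,)
    n_coarse = int(group.max()) + 1
    device = positions.device

    # Opacity-weighted average of positions within each voxel.
    w = opacities.clamp(min=1e-6)                                   # (N,)
    w_sum = torch.zeros(n_coarse, device=device).index_add_(0, group, w)
    coarse_pos = torch.zeros(n_coarse, 3, device=device).index_add_(
        0, group, positions * w[:, None]) / w_sum[:, None]

    # Enlarge scales so each coarse Gaussian roughly covers its group:
    # mean fine-level scale plus the weighted spread of positions in the voxel.
    mean_scale = torch.zeros(n_coarse, 3, device=device).index_add_(
        0, group, scales * w[:, None]) / w_sum[:, None]
    spread = torch.zeros(n_coarse, 3, device=device).index_add_(
        0, group, (positions - coarse_pos[group]).abs() * w[:, None]) / w_sum[:, None]
    coarse_scales = mean_scale + spread

    # Keep the largest opacity in each group as a simple stand-in.
    coarse_opacities = torch.zeros(n_coarse, device=device).scatter_reduce_(
        0, group, opacities, reduce="amax", include_self=False)
    return coarse_pos, coarse_scales, coarse_opacities

Choosing a larger voxel_size for each successively coarser level yields the fewer, larger Gaussians used for low-resolution rendering.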


Selective rendering is based on the screen size (or pixel coverage) of each Gaussian at the current resolution. During training, the minimum and maximum pixel coverage of each Gaussian are updated whenever the corresponding resolution level is rendered. At render time, a Gaussian is filtered out if its pixel coverage is much smaller than the minimum pixel coverage of the current resolution level.
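As a rough illustration of this filter, the sketch below estimates per-Gaussian pixel coverage with a simple pinhole projection and keeps only Gaussians whose coverage is not much smaller than the level's recorded minimum. The coverage formula, the threshold ratio, and the names pixel_coverage and select_for_level are assumptions made for illustration, not the paper's exact implementation.

import torch

def pixel_coverage(scales, depths, focal_px, render_width, native_width):
    # Approximate screen-space footprint (in pixels) of each Gaussian:
    # project its largest 3D extent with a pinhole camera at the native
    # resolution, then rescale to the current render resolution.
    extent = scales.max(dim=-1).values                    # largest axis per Gaussian
    native_px = 2.0 * extent * focal_px / depths.clamp(min=1e-6)
    return native_px * (render_width / native_width)

def select_for_level(coverage, level_min_coverage, ratio=0.5):
    # Filter out Gaussians whose footprint at the current resolution is much
    # smaller than the minimum pixel coverage recorded for this level.
    # `ratio` is a hypothetical cutoff, not necessarily the paper's value.
    return coverage >= ratio * level_min_coverage

# Example: render at 1/8 of the native resolution.
# keep = select_for_level(pixel_coverage(scales, depths, focal_px, W // 8, W), min_cov)

Because the filter discards Gaussians that would cover far less than a pixel at the current resolution, fewer splats need to be alpha-blended per pixel, which is the source of the rendering speedup reported above.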