EigenGS Representation: From Eigenspace to Gaussian Image Space

CVPR 2025

  • 1National Tsing Hua University
  • 2Academia Sinica

Abstract

Principal Component Analysis (PCA), a classical dimensionality reduction technique, and 2D Gaussian representation, an adaptation of 3D Gaussian Splatting for image representation, offer distinct approaches to modeling visual data. We present EigenGS, a novel method that bridges these paradigms through an efficient transformation pipeline connecting eigenspace and image-space Gaussian representations. Our approach enables instant initialization of Gaussian parameters for new images without requiring per-image optimization from scratch, dramatically accelerating convergence. EigenGS introduces a frequency-aware learning mechanism that encourages Gaussians to adapt to different scales, effectively modeling varied spatial frequencies and preventing artifacts in high-resolution reconstruction. Extensive experiments demonstrate that EigenGS not only achieves superior reconstruction quality compared to direct 2D Gaussian fitting but also reduces the necessary parameter count and training time. The results highlight EigenGS's effectiveness and generalization ability across images with varying resolutions and diverse categories, making Gaussian-based image representation both high-quality and viable for real-time applications.

Better Initialization and Faster Convergence

Fast convergence on the Cats dataset. Our method achieves perceptually significant improvements within the first 100 iterations. This rapid visual convergence demonstrates the effectiveness of our PCA-based initialization, particularly on well-aligned datasets at moderate resolution. The result also indicates that traditional PCA reconstruction, which essentially corresponds to our result at iteration zero, can be further improved by our approach.