
OUTCOME SPOTLIGHT

Interactive Large-Scale 3D Reconstruction with Gaussian Splatting

An interactive 3D reconstruction system enabling scalable, immersive AR/VR experiences under real-world hardware constraints.

Attended PBL

NVIDIA Project

3D Computer Graphics and Deep Learning

Create and innovate with cutting-edge computer graphics and deep learning techniques to produce advanced 3D models and simulations.

DEMONSTRATED CAPABILITY

Interactive System Architecture

System-Level Insight

Designed and evaluated an interactive 3D reconstruction system by balancing neural rendering fidelity, geometric consistency, and hardware constraints to support scalable AR/VR experiences.

Analyzed and compared alternative reconstruction approaches to select methods that balance visual quality, computational efficiency, and interactivity under constrained resources.

Structured a multi-stage system that integrates data acquisition, geometric alignment, and user interaction while maintaining spatial coherence at scale.

Evaluated system behavior across performance, usability, and stability constraints, identifying limitations and articulating transferable design trade-offs.

What This Project Achieves

This project advances interactive 3D reconstruction for AR and VR applications by combining Gaussian Splatting with geometric alignment and interface design. The system enables large-scale scene reconstruction with high visual fidelity while operating under limited hardware resources. By integrating interactive elements, scene merging, and user trajectory tracking, the project demonstrates how neural rendering can support immersive, real-time exploration of reconstructed environments beyond static visualization.
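At its core, Gaussian Splatting renders each pixel by depth-sorting the projected Gaussians and alpha-compositing them front to back. The single-pixel sketch below illustrates that compositing rule only; the splat fields and values are illustrative assumptions, not the project's actual renderer.

```python
import numpy as np

def composite_pixel(pixel_xy, splats):
    """Front-to-back alpha compositing of 2D Gaussian splats at one pixel.

    Each splat: dict with 'mean' (2,), 'cov' (2x2), 'opacity', 'color' (3,),
    and 'depth'. Returns the composited RGB color at pixel_xy.
    """
    color = np.zeros(3)
    T = 1.0                                              # transmittance so far
    for s in sorted(splats, key=lambda s: s["depth"]):   # near to far
        d = pixel_xy - s["mean"]
        # evaluate the projected 2D Gaussian footprint at this pixel
        g = np.exp(-0.5 * d @ np.linalg.inv(s["cov"]) @ d)
        alpha = min(0.999, s["opacity"] * g)
        color += T * alpha * np.array(s["color"], float)
        T *= 1.0 - alpha
        if T < 1e-4:                                     # early termination
            break
    return color
```

Because compositing stops once transmittance is negligible, nearby opaque splats cheaply occlude everything behind them, which is one reason splatting renders faster than ray-marched NeRF volumes.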

How This Was Built — Key Highlights

This project implemented an end-to-end 3D reconstruction and interaction pipeline designed for immersive AR/VR use cases. The workflow combined neural rendering, geometric alignment, and interface-level controls to support scalable and interactive exploration.

  • Applied Gaussian Splatting for high-resolution 3D reconstruction with lower rendering overhead than NeRF-based methods.

  • Conducted data collection and theoretical analysis to support large-scale scene reconstruction.

  • Implemented Iterative Closest Point (ICP) to stitch point clouds and maintain geometric consistency across scenes.

  • Developed an interactive viewer with trajectory tracking, portals, and region-based triggers.

  • Integrated external images and 3D models with position-aware windows to manage occlusion and spatial coherence.
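The ICP stitching step above can be sketched as a minimal point-to-point ICP: alternate nearest-neighbour correspondence with a closed-form SVD (Kabsch) rigid fit. This is a simplified illustration in NumPy, with brute-force matching and illustrative iteration limits, not the project's actual implementation.

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B via SVD."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, iters=50, tol=1e-8):
    """Align src to dst with point-to-point ICP; returns a 4x4 transform."""
    T = np.eye(4)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        err = np.sqrt(d2[np.arange(len(cur)), idx]).mean()
        R, t = best_fit_transform(cur, dst[idx])
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T                      # accumulate incremental transforms
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return T
```

In practice, stitching large reconstructions replaces the brute-force matching with a spatial index (e.g. a KD-tree) and adds outlier rejection, but the alternating match-then-fit structure is the same.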

Challenges

Developing an interactive neural reconstruction system introduced several technical and system-level challenges.

  • Scaling reconstruction while operating under limited hardware resources required careful algorithm selection.

  • Maintaining geometric coherence across stitched scenes demanded robust alignment and error handling.

  • Balancing rendering fidelity with real-time interactivity constrained both system architecture and interface design.

  • Occlusion management and scene-navigation complexity increased with large, merged reconstructions.

Insights

Project experimentation revealed key insights relevant to immersive system design.

  • Gaussian Splatting offers a strong balance between visual quality and computational efficiency for interactive applications.

  • Geometric alignment remains critical even in neural rendering pipelines to ensure spatial consistency.

  • Interactive elements such as portals and trajectory tracking significantly enhance user understanding of reconstructed spaces.

  • System-level constraints must be considered early to enable scalable and deployable AR/VR experiences.
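The portal and region-based trigger mechanisms mentioned above can be reduced to a simple pattern: an edge-triggered containment test against the tracked camera position. The sketch below shows that pattern with an axis-aligned box; the class name, fields, and trajectory values are hypothetical, chosen only to illustrate the idea.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TriggerRegion:
    """Axis-aligned box that fires a callback when the camera enters it."""
    lo: np.ndarray
    hi: np.ndarray
    on_enter: callable
    inside: bool = field(default=False)

    def update(self, cam_pos):
        now_inside = bool(np.all(cam_pos >= self.lo) and np.all(cam_pos <= self.hi))
        if now_inside and not self.inside:
            self.on_enter(cam_pos)        # edge-triggered: fires once per entry
        self.inside = now_inside

# Example: a portal-like region logging entries along a tracked trajectory
events = []
portal = TriggerRegion(lo=np.array([0.0, 0.0, 0.0]),
                       hi=np.array([1.0, 1.0, 1.0]),
                       on_enter=lambda p: events.append(tuple(np.round(p, 2))))

trajectory = [np.array([-0.5, 0.5, 0.5]),   # outside
              np.array([ 0.5, 0.5, 0.5]),   # enters -> fires
              np.array([ 0.6, 0.5, 0.5]),   # still inside -> no fire
              np.array([ 2.0, 0.5, 0.5]),   # leaves
              np.array([ 0.5, 0.5, 0.5])]   # re-enters -> fires again
for p in trajectory:
    portal.update(p)
```

Keeping triggers edge-triggered rather than level-triggered is what prevents a portal from re-firing on every frame while the user lingers inside it.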

Project Gallery

Academic Team Feedback

Feedback from the Project Lead—a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory specializing in computer graphics, computational design, and machine learning—highlighted the project as an outstanding example of advancing Gaussian Splatting toward interactive AR/VR applications. Drawing on her experience across industry research labs including NVIDIA, Meta Reality Labs, and Google, she noted the team’s strong system-level thinking in merging multiple large-scale reconstructions into a unified, navigable environment. The project demonstrated steady and disciplined execution, with early identification of technical challenges, thoughtful algorithm selection, and effective engineering solutions. Toshihiro was specifically recognized for leading the development of the interactive user interface, trajectory visualization, and virtual portal mechanisms, contributing substantially to both the technical robustness and usability of the final system. The Academic Team further emphasized that the final outcome was not only technically rigorous but also creatively designed and genuinely engaging to use.

Project Contributor(s)


Toshihiro Koizumi

The University of Tokyo, Japan

Project Reflection

This project allowed me to explore how neural rendering and geometric reasoning can be combined to create interactive AR and VR experiences. By working across reconstruction, alignment, and interface design, I gained a deeper understanding of how technical and creative decisions together shape immersive systems.
