tttLRM: Test-Time Training for Long Context and Autoregressive 3D Reconstruction

Chen Wang1*     Hao Tan2     Wang Yifan2     Zhiqin Chen2     Yuheng Liu3*     Kalyan Sunkavalli2     Sai Bi2     Lingjie Liu1$     Yiwei Hu2$
1University of Pennsylvania      2Adobe Research      3UCI
(*Work done while interning at Adobe Research; $Equal advising)

CVPR, 2026

Abstract

We propose tttLRM, a novel large 3D reconstruction model that leverages a Test-Time Training (TTT) layer to enable long-context, autoregressive 3D reconstruction with linear computational complexity, allowing the model to scale to long input sequences. Our framework efficiently compresses multiple image observations into the fast weights of the TTT layer, forming an implicit 3D representation in the latent space that can be decoded into various explicit formats, such as Gaussian Splats (GS), for downstream applications. The online learning variant of our model supports progressive 3D reconstruction and refinement from streaming observations. We demonstrate that pretraining on novel view synthesis tasks transfers effectively to explicit 3D modeling, yielding improved reconstruction quality and faster convergence. Extensive experiments show that our method outperforms state-of-the-art approaches in feedforward 3D Gaussian reconstruction on both objects and scenes.

Pipeline

Teaser image

Given a set of posed input images, tttLRM patchifies them and encodes them into input tokens (green boxes). These tokens are fed into the LaCT block (blue frame), which updates the fast weights with each incoming chunk. A separate set of virtual tokens (blue boxes) then queries the updated fast weights and is decoded into explicit 3D representations such as 3DGS for high-quality novel view synthesis.
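To make the update-then-query pattern concrete, here is a minimal sketch (not the authors' released code) of a TTT-style fast-weight block, assuming a single linear fast weight updated by one gradient step on a key-to-value reconstruction loss over each chunk of input tokens, then read out by virtual query tokens. All names (FastWeightBlock, lr, d_model, etc.) are hypothetical illustrations of the idea described above.

```python
import torch
import torch.nn as nn

class FastWeightBlock(nn.Module):
    """Hypothetical sketch of a chunk-wise test-time-training (fast-weight) layer."""
    def __init__(self, d_model: int, lr: float = 1.0):
        super().__init__()
        # Slow (learned) projections producing keys, values, and queries.
        self.to_k = nn.Linear(d_model, d_model, bias=False)
        self.to_v = nn.Linear(d_model, d_model, bias=False)
        self.to_q = nn.Linear(d_model, d_model, bias=False)
        self.lr = lr  # inner-loop learning rate for the fast weights

    def init_fast_weights(self, batch: int, d_model: int, device) -> torch.Tensor:
        # The fast weights act as the implicit latent 3D state; start at zero.
        return torch.zeros(batch, d_model, d_model, device=device)

    def update(self, W: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # One inner gradient step on ||k W - v||^2 over a whole chunk x: (B, N, D).
        # This compresses the chunk of image tokens into the fast weights W.
        k, v = self.to_k(x), self.to_v(x)
        pred = torch.einsum("bnd,bde->bne", k, W)
        grad = torch.einsum("bnd,bne->bde", k, pred - v) / x.shape[1]
        return W - self.lr * grad

    def query(self, W: torch.Tensor, q_tokens: torch.Tensor) -> torch.Tensor:
        # Read out the updated fast weights with a separate set of (virtual) tokens.
        q = self.to_q(q_tokens)
        return torch.einsum("bnd,bde->bne", q, W)

# Usage sketch: stream chunks of image tokens, then decode from virtual tokens.
block = FastWeightBlock(d_model=64)
W = block.init_fast_weights(batch=1, d_model=64, device="cpu")
for chunk in torch.randn(4, 1, 256, 64):          # 4 chunks of 256 image tokens
    W = block.update(W, chunk)                    # progressive / autoregressive update
latents = block.query(W, torch.randn(1, 128, 64)) # query with virtual tokens
```

Because each chunk touches the fast weights once and the state size is fixed, the per-token cost stays constant, which is where the linear complexity in the number of input views comes from.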

Feedforward GS Reconstruction







Autoregressive GS Reconstruction

Comparisons

Interactive slider comparisons: 3DGS vs. Ours, LongLRM vs. Ours.

BibTeX


@inproceedings{wang2026tttlrm,
  author    = {Chen Wang and Hao Tan and Wang Yifan and Zhiqin Chen and Yuheng Liu and Kalyan Sunkavalli and Sai Bi and Lingjie Liu and Yiwei Hu},
  title     = {tttLRM: Test-Time Training for Long Context and Autoregressive 3D Reconstruction},
  booktitle = {CVPR},
  year      = {2026},
}