Abstract
A geometry-free framework using pre-trained diffusion transformers lifts perspective images and videos to 360° panoramas without requiring camera metadata, achieving state-of-the-art performance through token sequence processing and addressing seam artifacts via circular latent encoding.
Lifting perspective images and videos to 360° panoramas enables immersive 3D world generation. Existing approaches often rely on explicit geometric alignment between the perspective view and the equirectangular projection (ERP) space. However, this requires known camera metadata, hindering application to in-the-wild data where such calibration is typically absent or noisy. We propose 360Anything, a geometry-free framework built upon pre-trained diffusion transformers. By treating the perspective input and the panorama target simply as token sequences, 360Anything learns the perspective-to-equirectangular mapping in a purely data-driven way, eliminating the need for camera information. Our approach achieves state-of-the-art performance on both image and video perspective-to-360° generation, outperforming prior works that use ground-truth camera information. We also trace the root cause of seam artifacts at ERP boundaries to zero-padding in the VAE encoder, and introduce Circular Latent Encoding to facilitate seamless generation. Finally, we show competitive results on zero-shot camera FoV and orientation estimation benchmarks, demonstrating 360Anything's deep geometric understanding and broader utility in computer vision tasks. Additional results are available at https://360anything.github.io/.
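To make the "token sequences" idea concrete, here is a minimal PyTorch sketch of geometry-free conditioning: the perspective latent and the noisy panorama latent are patchified into tokens and concatenated along the sequence axis, so a diffusion transformer can attend across both without any camera metadata. All names here (`GeometryFreeConditioning`, the segment embedding, the patch size) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GeometryFreeConditioning(nn.Module):
    """Illustrative sketch: fuse condition and target latents into one token sequence."""

    def __init__(self, in_ch: int = 4, dim: int = 1024, patch: int = 2):
        super().__init__()
        # Shared patch embedding for both streams; a learned segment embedding
        # tells the transformer which tokens are condition vs. target.
        self.patchify = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        self.segment = nn.Embedding(2, dim)  # 0 = perspective condition, 1 = panorama target

    def forward(self, persp_latent: torch.Tensor, noisy_pano_latent: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, N, dim) token sequences.
        cond = self.patchify(persp_latent).flatten(2).transpose(1, 2)
        tgt = self.patchify(noisy_pano_latent).flatten(2).transpose(1, 2)
        cond = cond + self.segment.weight[0]
        tgt = tgt + self.segment.weight[1]
        # One long sequence; downstream DiT blocks attend across both,
        # so no camera intrinsics or extrinsics are ever injected.
        return torch.cat([cond, tgt], dim=1)
```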
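The abstract also attributes ERP seam artifacts to zero-padding in the VAE encoder. Below is a hedged sketch of one way Circular Latent Encoding could work (an assumption; the paper's exact mechanism is not specified here): wrap the panorama circularly along the width axis before encoding so the seam columns convolve over their true neighbors rather than zeros, then crop the latent back to the original width. `vae_encode` is a hypothetical placeholder for any image-to-latent encoder.

```python
import torch
import torch.nn.functional as F

def encode_with_circular_padding(vae_encode, erp_image: torch.Tensor,
                                 pad_px: int = 64, down: int = 8) -> torch.Tensor:
    """Encode an ERP image (B, C, H, W) with circular context at the horizontal seam.

    `vae_encode` is a placeholder callable mapping images to latents; `down` is
    its spatial downsampling factor (8 for SD-style VAEs). Both are assumptions.
    `pad_px` should be a multiple of `down` so the latent crop is exact.
    """
    # Wrap columns from the opposite edge so the seam sees real neighbors
    # instead of zero-padding during convolution.
    padded = F.pad(erp_image, (pad_px, pad_px, 0, 0), mode="circular")
    latent = vae_encode(padded)
    # Crop the latent back to the original panorama width.
    pad_lat = pad_px // down
    return latent[..., pad_lat:latent.shape[-1] - pad_lat]
```

At decode time the symmetric trick would apply: circularly pad the latent, decode, and crop the image, so the two ends of the panorama meet without a visible seam.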
Community
360Anything lifts arbitrary perspective images and videos to seamless, gravity-aligned 360° panoramas, without using any camera or 3D information.
Project page: https://360anything.github.io/
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- LaVR: Scene Latent Conditioned Generative Video Trajectory Re-Rendering using Large 4D Reconstruction Models (2026)
- GimbalDiffusion: Gravity-Aware Camera Control for Video Generation (2025)
- Beyond Inpainting: Unleash 3D Understanding for Precise Camera-Controlled Video Generation (2026)
- Unified Camera Positional Encoding for Controlled Video Generation (2025)
- Gen3R: 3D Scene Generation Meets Feed-Forward Reconstruction (2026)
- GeoVideo: Introducing Geometric Regularization into Video Generation Model (2025)
- ReCamDriving: LiDAR-Free Camera-Controlled Novel Trajectory Video Generation (2025)