3D-Aware Implicit Motion Control for View-Adaptive Human Video Generation • arXiv:2602.03796 • Published Feb 3 • 62 upvotes
Research on World Models Is Not Merely Injecting World Knowledge into Specific Tasks • arXiv:2602.01630 • Published Feb 2 • 46 upvotes
CoF-T2I: Video Models as Pure Visual Reasoners for Text-to-Image Generation • arXiv:2601.10061 • Published Jan 15 • 31 upvotes
GARDO: Reinforcing Diffusion Models without Reward Hacking • arXiv:2512.24138 • Published Dec 30, 2025 • 29 upvotes
MemFlow: Flowing Adaptive Memory for Consistent and Efficient Long Video Narratives • arXiv:2512.14699 • Published Dec 16, 2025 • 28 upvotes
SVG-T2I: Scaling Up Text-to-Image Latent Diffusion Model Without Variational Autoencoder • arXiv:2512.11749 • Published Dec 12, 2025 • 39 upvotes
Ground Slow, Move Fast: A Dual-System Foundation Model for Generalizable Vision-and-Language Navigation • arXiv:2512.08186 • Published Dec 9, 2025 • 23 upvotes
MultiShotMaster: A Controllable Multi-Shot Video Generation Framework • arXiv:2512.03041 • Published Dec 2, 2025 • 66 upvotes
OmniX: From Unified Panoramic Generation and Perception to Graphics-Ready 3D Scenes • arXiv:2510.26800 • Published Oct 30, 2025 • 22 upvotes
VFXMaster: Unlocking Dynamic Visual Effect Generation via In-Context Learning • arXiv:2510.25772 • Published Oct 29, 2025 • 33 upvotes
CodePlot-CoT: Mathematical Visual Reasoning by Thinking with Code-Driven Images • arXiv:2510.11718 • Published Oct 13, 2025 • 14 upvotes
AdaViewPlanner: Adapting Video Diffusion Models for Viewpoint Planning in 4D Scenes • arXiv:2510.10670 • Published Oct 12, 2025 • 19 upvotes
UniMMVSR: A Unified Multi-Modal Framework for Cascaded Video Super-Resolution • arXiv:2510.08143 • Published Oct 9, 2025 • 20 upvotes
VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning • arXiv:2510.08555 • Published Oct 9, 2025 • 64 upvotes
LongLive: Real-time Interactive Long Video Generation • arXiv:2509.22622 • Published Sep 26, 2025 • 188 upvotes