arxiv:2512.16229

LoPA: Scaling dLLM Inference via Lookahead Parallel Decoding

Published on Dec 18 · Submitted by xuchenkai on Dec 23

Abstract

AI-generated summary

LoPA, a training-free algorithm, enhances the parallelism of diffusion large language models, doubling the tokens per forward pass and boosting throughput with multi-GPU deployment.

Diffusion Large Language Models (dLLMs) have demonstrated significant potential for high-speed inference. However, current confidence-driven decoding strategies are constrained by limited parallelism, typically achieving only 1--3 tokens per forward pass (TPF). In this work, we identify that the degree of parallelism during dLLM inference is highly sensitive to the Token Filling Order (TFO). We then introduce Lookahead PArallel Decoding (LoPA), a training-free, plug-and-play algorithm that identifies a superior TFO and thereby accelerates inference. LoPA concurrently explores distinct candidate TFOs via parallel branches and selects the one with the highest potential for future parallelism based on branch confidence. We apply LoPA to the state-of-the-art D2F model and observe a substantial enhancement in decoding efficiency. Notably, LoPA increases the TPF of D2F-Dream to 10.1 on GSM8K while maintaining performance superior to the Dream baseline. Furthermore, to facilitate this unprecedented degree of parallelism, we develop a specialized multi-device inference system featuring Branch Parallelism (BP), which achieves a single-sample throughput of 1073.9 tokens per second under multi-GPU deployment. The code is available at https://github.com/zhijie-group/LoPA.
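
At a high level, the abstract describes LoPA as forking several decoding branches, each committing a different candidate Token Filling Order, and keeping the branch whose remaining masked positions look most confidently decodable. The sketch below is a minimal, hypothetical Python illustration of that selection loop, not the released implementation (see the GitHub link above): the dLLM forward pass is mocked by a `model_confidences` stub, and scoring a branch by the mean confidence over its remaining masks is an assumption standing in for the paper's branch-confidence criterion.

```python
# Conceptual sketch of lookahead branch selection over token-filling orders.
# NOT the authors' implementation; model_confidences is a hypothetical stub.
import random
from typing import Dict, List, Tuple

MASK = None  # stand-in for a masked token position


def model_confidences(seq: List, positions: List[int]) -> Dict[int, Tuple[str, float]]:
    """Mock dLLM forward pass: predict a token and a confidence for each masked slot."""
    rng = random.Random(hash(tuple(str(t) for t in seq)) & 0xFFFF)
    return {p: (f"tok{p}", rng.uniform(0.2, 1.0)) for p in positions}


def decode_lopa(seq: List, num_branches: int = 4) -> List:
    """Confidence-driven decoding where each step forks branches that commit
    different candidate token-filling orders (TFOs) and keeps the branch whose
    remaining masks look most confidently decodable (the 'lookahead' score)."""
    while any(t is MASK for t in seq):
        masked = [i for i, t in enumerate(seq) if t is MASK]
        preds = model_confidences(seq, masked)
        # Rank masked positions by confidence; branch k commits the top-k slots,
        # so each branch realizes a different candidate TFO.
        order = sorted(masked, key=lambda p: preds[p][1], reverse=True)
        branches = []
        for k in range(1, min(num_branches, len(order)) + 1):
            cand = list(seq)
            for p in order[:k]:
                cand[p] = preds[p][0]
            rest = [i for i, t in enumerate(cand) if t is MASK]
            if rest:
                # Lookahead: score a branch by how confident the model is about
                # the masks remaining after its fill (future parallel potential).
                future = model_confidences(cand, rest)
                score = sum(c for _, c in future.values()) / len(rest)
            else:
                score = 1.0
            branches.append((score, cand))
        seq = max(branches, key=lambda b: b[0])[1]  # keep the best branch
    return seq


if __name__ == "__main__":
    print(decode_lopa([MASK] * 8))
```

In the paper's system, the candidate branches are evaluated concurrently across devices via Branch Parallelism (BP); the sketch above evaluates them serially for simplicity.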

Community

arXiv lens breakdown of this paper 👉 https://arxivlens.com/PaperView/Details/lopa-scaling-dllm-inference-via-lookahead-parallel-decoding-875-bf705008

  • Executive Summary
  • Detailed Breakdown
  • Practical Applications

