Autoregressive (AR) models have recently achieved state-of-the-art performance in text and image generation. However, their primary limitation is slow generation speed due to the token-by-token process. We ask an ambitious question: can a pre-trained AR model be adapted to generate outputs in just one or two steps? If successful, this would significantly advance the development and deployment of AR models. We observe that existing works that try to speed up AR generation by producing multiple tokens at once fundamentally cannot capture the output distribution, due to the conditional dependencies between tokens, which limits their effectiveness for few-step generation. To overcome this, we propose Distilled Decoding (DD), which uses flow matching to create a deterministic mapping from a Gaussian distribution to the output distribution of the pre-trained AR model. We then train a network to distill this mapping, enabling few-step generation. The entire training process of DD does not require the training data of the original AR model (unlike some other methods), making DD more practical. We evaluate DD on state-of-the-art image AR models and present promising results. For VAR, which requires 10-step generation (680 tokens), DD enables one-step generation (6.3× speed-up), with an acceptable increase in FID from 4.19 to 9.96 on ImageNet-256. Similarly, for LlamaGen, DD reduces generation from 256 steps to 1, achieving a 217.8× speed-up with a comparable FID increase from 4.11 to 11.35 on ImageNet-256. In both cases, baseline methods fail completely, with FID scores >100. DD also excels on text-to-image generation, reducing generation from 256 steps to 2 for LlamaGen with a minimal FID increase from 25.70 to 28.95. As the first work to demonstrate the possibility of one-step generation for image AR models, DD challenges the prevailing notion that AR models are inherently slow, and opens up new opportunities for efficient AR generation.
Highlight
Method
Training Few-step AR Models is Non-trivial
Simultaneously predicting the probabilities of a set of tokens is a common method for reducing the number of autoregressive (AR) steps. However, we demonstrate that this method fails when sampling with very few steps.
The target of this method, $\prod_{i=1}^{k} p\big(x_{t+i} \mid x_{\le t}\big)$, completely ignores the correlations between the predicted tokens, creating a gap with the ground-truth distribution $p\big(x_{t+1}, \dots, x_{t+k} \mid x_{\le t}\big)$.
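To make this gap concrete, here is a small numerical illustration (a toy construction of ours, not taken from the paper): two binary tokens whose true joint distribution is perfectly correlated have uniform marginals, so a sampler that draws each token independently from its marginal produces token pairs that are impossible under the true distribution about half of the time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ground-truth joint over two binary tokens: all mass on (0, 0) and (1, 1).
# The per-token marginals are both uniform over {0, 1}.
n = 100_000

# Sampling from the true joint: the two tokens are always equal.
joint_choice = rng.integers(0, 2, size=n)
joint_samples = np.stack([joint_choice, joint_choice], axis=1)

# Parallel (one-step) decoding keeps only the marginals and samples
# each token independently.
factorized_samples = rng.integers(0, 2, size=(n, 2))

# Fraction of samples with zero probability under the true joint.
invalid = np.mean(factorized_samples[:, 0] != factorized_samples[:, 1])
print(f"impossible token pairs from factorized sampling: {invalid:.3f}")  # ~0.5
print(f"impossible token pairs from joint sampling:      "
      f"{np.mean(joint_samples[:, 0] != joint_samples[:, 1]):.3f}")       # 0.0
```

No matter how accurately the marginals are learned, the factorized sampler cannot recover the correlation, and the mismatch grows with the number of tokens predicted in parallel.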
When the number of steps is small, the set of tokens predicted in parallel becomes large, and this gap leads to significant performance degradation. Below are examples of one-step generation using this method.
The Core Idea of DD
(1) DD uses a pre-trained AR model and flow matching to construct the mapping from a noise token to a data token (see figure left; a minimal per-token sketch is also given after this list).
(2) Next, DD constructs a trajectory that gradually transforms a sequence where all tokens are noise into a sequence where all tokens are generated data (see figure right). Each noise sequence corresponds to a unique trajectory.
(3) Based on this trajectory, DD distills a model to perform few-step sampling (see figure right).
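To make step (1) concrete, below is a minimal, self-contained sketch of one way such a per-token mapping can be realized: flow matching between a standard Gaussian and the discrete codebook distribution given by the AR model, with a linear-interpolation path and Euler integration. The function name `flow_match_token` and the exact construction are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def flow_match_token(p, codebook, z, n_steps=100, eps=1e-4):
    """Map one Gaussian noise vector z to a codebook embedding by integrating
    the closed-form flow-matching velocity field for a discrete target.

    p:        (K,)   AR-model probabilities over the codebook at this position
    codebook: (K, d) codebook embeddings
    z:        (d,)   noise vector drawn from N(0, I)
    """
    x = z.copy()
    ts = np.linspace(0.0, 1.0 - eps, n_steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        # Linear path x_t = t * e_k + (1 - t) * z, so x_t | e_k ~ N(t*e_k, (1-t)^2 I).
        diff = x[None, :] - t * codebook                               # (K, d)
        log_w = np.log(p + 1e-12) - 0.5 * (diff ** 2).sum(1) / (1.0 - t) ** 2
        w = np.exp(log_w - log_w.max())
        w /= w.sum()                                                   # posterior over codes
        # Marginal velocity = posterior-weighted average of conditional velocities.
        v = (w[:, None] * (codebook - x[None, :])).sum(0) / (1.0 - t)
        x = x + (t_next - t) * v                                       # Euler step
    # The endpoint lands (numerically) near one embedding; snap to the nearest code.
    token = int(np.argmin(((codebook - x[None, :]) ** 2).sum(1)))
    return token, x

# Example with a random 16-entry, 8-dimensional codebook (illustrative values only).
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))
p = rng.dirichlet(np.ones(16))
token, x = flow_match_token(p, codebook, rng.normal(size=8))
print(token)
```

Applying such a mapping token by token, with the AR model supplying the probabilities conditioned on the tokens already mapped, is how a per-token mapping composes into the sequence-level trajectory described in (2).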
The detailed workflow of DD
As discussed above, DD uses a pre-trained AR model and flow matching to construct a trajectory and distills a model based on it. The detailed workflow of DD consists of three parts (a minimal code sketch follows the list):
(1) Dataset generation. We first construct a dataset of noise-data pairs. Specifically, we randomly sample noise sequences from a standard Gaussian distribution and calculate the endpoint of the trajectory.
(2) Training. We then distill a model to predict the endpoint of the trajectory given any intermediate point (including the starting point) as input.
(3) Sampling. Finally, we sample starting from a pure noise sequence. After obtaining the predicted endpoint, we can map it back to a closer intermediate point on the trajectory and predict again for higher quality. Alternatively, we can involve the pre-trained AR model in this process to achieve a finer quality-time trade-off.
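The PyTorch-style sketch below summarizes this workflow. The helpers `ar_flow_generate` (dataset generation with the pre-trained AR model plus flow matching), `make_intermediate` (building an intermediate point of the trajectory), the `student` network, and the regression loss are all assumptions made for illustration; DD's actual objectives and interfaces follow the paper.

```python
import torch
import torch.nn.functional as F

def distill_step(student, optimizer, ar_flow_generate, make_intermediate,
                 seq_len, embed_dim):
    """One training iteration: regress the student onto the trajectory endpoint."""
    z_seq = torch.randn(1, seq_len, embed_dim)         # (1) noise sequence ~ N(0, I)
    with torch.no_grad():
        data_seq = ar_flow_generate(z_seq)             # (1) endpoint of its trajectory
    k = int(torch.randint(0, seq_len + 1, (1,)))       # random intermediate point (k=0: pure noise)
    x_k = make_intermediate(z_seq, data_seq, k)        # first k tokens are data, rest noise
    loss = F.mse_loss(student(x_k), data_seq)          # (2) predict the endpoint
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def sample(student, make_intermediate, seq_len, embed_dim, k_mid=None):
    """(3) Few-step sampling: one forward pass from pure noise, optionally
    followed by mapping back to an intermediate point and a second pass."""
    z_seq = torch.randn(1, seq_len, embed_dim)
    out = student(z_seq)                               # 1-step result
    if k_mid is not None:
        x_mid = make_intermediate(z_seq, out, k_mid)   # closer intermediate point
        out = student(x_mid)                           # refined 2-step result
    return out
```

The same `make_intermediate` construction serves both training (random intermediate points as inputs) and sampling (re-noising the first prediction), which is what allows the one-step and few-step modes to share a single distilled model.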
Quantitative Results
More Qualitative Results
We demonstrate more generated examples here.
Label-conditional Generation
Text-to-Image Generation
Prompts without * are from the LAION-COCO dataset, while those marked with * were created by us.
DD (2 steps)
LlamaGen (256 steps)
@article{liu2024distilleddecoding1onestep,
title={Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching},
author={Enshu Liu and Xuefei Ning and Yu Wang and Zinan Lin},
year={2024},
journal={arXiv preprint arXiv:2412.17153},
}