We present DyMU, an efficient, training-free framework that dynamically reduces the computational burden of vision-language models (VLMs) while maintaining high task performance. Our approach comprises two key components. First, Dynamic Token Merging (DToMe) reduces the number of visual token embeddings by merging similar tokens based on image complexity, addressing the inherent inefficiency of fixed-length outputs in vision transformers. Second, Virtual Token Unmerging (VTU) simulates the expected token sequence for large language models (LLMs) by efficiently reconstructing the attention dynamics of a full sequence, thus preserving downstream performance without additional fine-tuning. Unlike previous approaches, our method dynamically adapts token compression to the content of the image and operates completely training-free, making it readily applicable to most state-of-the-art VLM architectures. Extensive experiments on image and video understanding tasks demonstrate that DyMU can reduce the average visual token count by 32%-85% while achieving performance comparable to full-length models across diverse VLM architectures, including the recently popularized AnyRes-based visual encoders. Furthermore, through qualitative analyses we demonstrate that DToMe effectively adapts token reduction to image complexity and, unlike existing systems, provides users more control over computational costs.
Dynamic Token Merging (DToMe): DToMe first determines per-layer thresholds (Left) by feeding a large batch of images into the vision transformer and computing bipartite token similarities. We rank these edges across the entire batch and choose the top B·r edges (B = batch size, r = desired average number of tokens). As a result, more edges are chosen from simpler images (which contain more redundancy), while complex images remain less merged. During inference, DToMe merges tokens on a per-image basis using these pre-computed thresholds, as sketched below.
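As a concrete illustration, the minimal PyTorch-style sketch below shows how a batch-level threshold could be estimated from bipartite token similarities and then applied per image at inference. All function and variable names are hypothetical and the computation is simplified (e.g., token-size tracking and the exact choice of similarity features are omitted); this is a sketch of the idea, not the authors' implementation.

import torch
import torch.nn.functional as F

def estimate_layer_threshold(token_batches, r):
    # token_batches: list of [N, D] token tensors (one per image) at a given ViT layer.
    # r: desired average number of token merges per image at this layer (assumption).
    scores = []
    for x in token_batches:
        xn = F.normalize(x, dim=-1)
        a, b = xn[0::2], xn[1::2]          # bipartite split into two token sets
        best, _ = (a @ b.T).max(dim=-1)    # best-match similarity (edge score) per A-token
        scores.append(best)
    scores = torch.cat(scores)
    k = r * len(token_batches)             # keep the top B*r edges across the whole batch
    return scores.topk(k).values[-1]        # smallest similarity among the chosen edges

def merge_with_threshold(x, threshold):
    # Per-image inference: merge each A-token into its best B-match whenever the
    # similarity exceeds the pre-computed threshold, so simpler (more redundant)
    # images end up with fewer tokens. Size weighting and token ordering are omitted.
    xn = F.normalize(x, dim=-1)
    best, match = (xn[0::2] @ xn[1::2].T).max(dim=-1)
    merge = best > threshold
    out_b = x[1::2].clone()
    counts = torch.ones(out_b.shape[0])
    out_b.index_add_(0, match[merge], x[0::2][merge])
    counts.index_add_(0, match[merge], torch.ones(int(merge.sum())))
    out_b = out_b / counts.unsqueeze(-1)
    return torch.cat([x[0::2][~merge], out_b], dim=0)  # kept A-tokens + merged B-tokens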
Virtual Token Unmerging (VTU): We then apply VTU (Right) in the self-attention layers of the pretrained VLM to efficiently expand the attention matrices to the standard token count, ensuring the model's original weights and outputs remain compatible, before re-merging the tokens for the next layer. (See paper for detailed derivations.) The overall process is training-free and preserves crucial image information by allocating the token budget more effectively across both simple and complex images.
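To clarify what "virtually unmerging" means in the attention computation, here is a naive reference sketch with hypothetical names (not the authors' code): the merged tokens are expanded back to the standard sequence length via the merge index, the layer's self-attention runs as usual, and the outputs are re-merged by averaging within each group. The paper derives a mathematically equivalent factorized form that operates directly on the merged tokens and never materializes the full-length sequence.

import torch
import torch.nn.functional as F

def vtu_self_attention(x_merged, idx, wq, wk, wv):
    # x_merged: [M, D] merged visual token embeddings entering a self-attention layer.
    # idx: [N] long tensor mapping each original position (N = standard token count)
    #      to its merged token.
    # wq, wk, wv: [D, D] projection weights from the pretrained, frozen layer.
    x_full = x_merged[idx]                           # "unmerge": expand to the full length
    q, k, v = x_full @ wq, x_full @ wk, x_full @ wv  # per-position transforms (e.g., RoPE)
                                                     # would be applied to q, k here
    attn = F.softmax(q @ k.T / q.shape[-1] ** 0.5, dim=-1)  # full-sequence attention dynamics
    out_full = attn @ v                              # [N, D]
    # Re-merge: aggregate the outputs of all positions assigned to the same merged token.
    m = x_merged.shape[0]
    out = out_full.new_zeros((m, out_full.shape[-1]))
    counts = out_full.new_zeros(m)
    out.index_add_(0, idx, out_full)
    counts.index_add_(0, idx, torch.ones_like(idx, dtype=out_full.dtype))
    return out / counts.unsqueeze(-1)

Note that even without position-dependent transforms, the full-sequence softmax weights each duplicated key in proportion to how many original positions it represents, which is the attention behavior the pretrained model expects for a standard-length sequence.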
@misc{wang2025dymudynamicmergingvirtual,
  title={DyMU: Dynamic Merging and Virtual Unmerging for Efficient VLMs},
  author={Zhenhailong Wang and Senthil Purushwalkam and Caiming Xiong and Silvio Savarese and Heng Ji and Ran Xu},
  year={2025},
  eprint={2504.17040},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.17040},
}