DyMU: Dynamic Merging and Virtual Unmerging for Efficient VLMs

1University of Illinois Urbana-Champaign, 2Salesforce Research
*Equal Contribution
Teaser figure.
Dynamic Merging and Virtual Unmerging (DyMU) adaptively reduces visual token lengths based on image complexity, as shown on the left where simpler images are represented using fewer tokens. In contrast, existing representations (like CLIP) always use the same number of tokens regardless of image content. DyMU applied to VLMs (right) maintains competitive performance across different token compression levels while significantly reducing FLOPs. This training-free approach preserves key semantic information, offering a more efficient plug-and-play alternative to VLMs with fixed-length visual tokens.

Abstract

We present DyMU, an efficient, training-free framework that dynamically reduces the computational burden of vision-language models (VLMs) while maintaining high task performance. Our approach comprises two key components. First, Dynamic Token Merging (DToMe) reduces the number of visual token embeddings by merging similar tokens based on image complexity, addressing the inherent inefficiency of fixed-length outputs in vision transformers. Second, Virtual Token Unmerging (VTU) simulates the expected token sequence for large language models (LLMs) by efficiently reconstructing the attention dynamics of a full sequence, thus preserving downstream performance without additional fine-tuning. Unlike previous approaches, our method dynamically adapts token compression to the content of the image and operates completely training-free, making it readily applicable to most state-of-the-art VLM architectures. Extensive experiments on image and video understanding tasks demonstrate that DyMU can reduce the average visual token count by 32%-85% while achieving performance comparable to full-length models across diverse VLM architectures, including the recently popularized AnyRes-based visual encoders. Furthermore, through qualitative analyses, we demonstrate that DToMe effectively adapts token reduction to image complexity and, unlike existing systems, provides users more control over computational costs.

Method Overview

Method overview figure.

Dynamic Token Merging (DToMe): DToMe first determines per-layer thresholds (Left) by feeding a large batch of images into the vision transformer and computing bipartite token similarities. We rank these edges across the entire batch and choose the top B·r edges (where r is the desired average number of tokens and B is the batch size). As a result, more edges are chosen from simpler images (which contain more redundancy), while complex images remain less merged. During inference, DToMe merges tokens on a per-image basis using these pre-computed thresholds.
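For intuition, here is a minimal sketch of the per-layer threshold calibration described above. It assumes a ToMe-style bipartite split of tokens, uses the layer's key vectors as similarity features, and treats r as a per-image, per-layer merge budget; the function name, tensor shapes, and these choices are illustrative assumptions, not the authors' implementation.

    # Sketch of DToMe-style threshold calibration (illustrative, not the authors' code).
    # Assumes a ToMe-style bipartite split: tokens at even indices are scored against
    # their most similar token at odd indices, using the layer's key vectors.
    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def calibrate_layer_threshold(keys: torch.Tensor, r: int) -> float:
        """keys: [B, N, D] token keys at one ViT layer for a calibration batch of B images.
        r: assumed average number of merges per image at this layer (hypothetical knob).
        Returns a similarity threshold for this layer."""
        B = keys.shape[0]
        k = F.normalize(keys, dim=-1)
        a, b = k[:, ::2, :], k[:, 1::2, :]               # bipartite token split
        sim = a @ b.transpose(-1, -2)                    # [B, N/2, N/2] cosine similarities
        best_edges = sim.max(dim=-1).values.flatten()    # best edge per source token, pooled over the batch
        top = best_edges.topk(min(B * r, best_edges.numel())).values
        return top[-1].item()                            # weakest similarity among the top B*r edges

At inference, any edge whose similarity exceeds the layer's threshold is merged, so images with more redundancy naturally end up with fewer tokens.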

Virtual Token Unmerging (VTU): We then apply VTU (Right) in the self-attention layers of the pretrained VLM to efficiently expand the attention matrices to the standard token count, ensuring the model's original weights and outputs remain compatible, before re-merging the tokens for the next layer. (See the paper for detailed derivations.) The overall process is training-free and retains crucial image information by allocating the token budget more effectively across both simple and complex images.
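As a rough reference for the idea being exploited, the naive version below scatters merged tokens back to the full sequence length with an assignment map, runs a frozen self-attention layer at the token count it was trained on, and re-merges the outputs by averaging within each cluster. This is not the efficient reconstruction derived in the paper; the names x_merged, assign, and the generic attn module are assumptions for illustration.

    # Naive reference for the Virtual Token Unmerging (VTU) idea -- not the efficient
    # form derived in the paper, which avoids materializing the full-length sequence.
    import torch

    def vtu_self_attention(x_merged: torch.Tensor, assign: torch.Tensor, attn) -> torch.Tensor:
        """x_merged: [M, D] merged visual tokens.
        assign: [N] long tensor mapping each of the N original positions to a merged token.
        attn: a frozen self-attention module taking and returning [1, N, D]."""
        x_full = x_merged[assign].unsqueeze(0)           # virtually unmerge to the full length N
        y_full = attn(x_full).squeeze(0)                 # attention sees the standard token count
        M, _ = x_merged.shape
        counts = torch.zeros(M, dtype=x_merged.dtype, device=x_merged.device)
        counts.index_add_(0, assign, torch.ones(assign.numel(), dtype=x_merged.dtype, device=x_merged.device))
        y_merged = torch.zeros_like(x_merged).index_add_(0, assign, y_full)
        return y_merged / counts[:, None]                # re-merge by averaging within each cluster

The paper derives an equivalent but more efficient computation on the merged tokens directly; this sketch only conveys the full-length equivalence that VTU preserves.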

Qualitative Results


Dynamic token length consistent with image complexity.
More flexible control of visual token length by combining DyMU with additional vision tools.

Quantitative Results


Comparison with state-of-the-art methods for improving efficiency on LLaVA 1.5. DyMU-low achieves 97.7% of the original full-length LLaVA baseline's performance while using only ~15% of the tokens. Importantly, DyMU is entirely training-free and generally outperforms previous fixed-length, training-free methods, while also enabling variable-length outputs. For more results on different vision encoders and VLMs, please refer to the paper.

BibTeX


    @misc{wang2025dymudynamicmergingvirtual,
      title={DyMU: Dynamic Merging and Virtual Unmerging for Efficient VLMs}, 
      author={Zhenhailong Wang and Senthil Purushwalkam and Caiming Xiong and Silvio Savarese and Heng Ji and Ran Xu},
      year={2025},
      eprint={2504.17040},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.17040}, 
    }