The quadratic complexity of Multimodal Large Language Models (MLLMs) with respect to sequence length poses significant computational and memory challenges, hindering their real-world deployment. While existing training-free token reduction methods aim to address these inefficiencies, how to precisely identify redundant visual tokens and recover the essential information from discarded tokens remains unclear. In this paper, we propose a "filter-correlate-compress" framework that decomposes token reduction into three stages: filtering redundant tokens, correlating discarded information to preserved tokens, and compressing tokens to minimize redundancy. Following this framework, we propose FiCoCo, a solution that addresses the limitations of a single redundancy assessment, applies adaptive strategies to retain critical information from discarded tokens, and mitigates semantic dilution during token fusion. Two specialized variants, FiCoCo-V (for vision encoders) and FiCoCo-L (for LLM decoders), further optimize efficiency across MLLM architectures. Extensive experiments demonstrate that FiCoCo achieves up to 5.7x/14.7x FLOPs reduction while retaining 92.8%/93.6% of the original performance on LLaVA-1.5-7B/LLaVA-NeXT-7B. Our methods consistently outperform state-of-the-art training-free approaches, showcasing effectiveness and generalizability across model architectures, sizes, and tasks without requiring retraining.
Based on this paradigm, we develop a series of methods named FiCoCo that efficiently reduce the number of visual tokens without retraining. FiCoCo-V reduces tokens in the visual encoder, while FiCoCo-L reduces tokens in the LLM decoder.
In addition to the figure above, we provide algorithm illustrations for FiCoCo-V and FiCoCo-L to clarify their distinct solutions across the three stages.
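To make the three stages concrete, the minimal PyTorch sketch below illustrates the general filter-correlate-compress pattern on a batch of visual tokens. It is not the official FiCoCo implementation: the attention-based redundancy score, the cosine-similarity correlation, and the weighted fusion rule are simplifying assumptions for exposition; please refer to the paper for the exact formulations.

```python
# Illustrative sketch of a filter-correlate-compress step (assumptions only).
import torch
import torch.nn.functional as F


def filter_correlate_compress(tokens: torch.Tensor, attn: torch.Tensor, keep: int):
    """Reduce N visual tokens to `keep` tokens.

    tokens: (B, N, D) visual token features
    attn:   (B, N, N) attention weights from the current layer
    keep:   number of tokens to preserve
    """
    B, N, D = tokens.shape

    # Filter: score redundancy (here, tokens that receive little attention
    # from other tokens are treated as redundant) and keep the top-`keep`.
    received = attn.mean(dim=1)                                       # (B, N)
    keep_idx = received.topk(keep, dim=-1).indices                    # (B, keep)
    drop_idx = received.topk(N - keep, dim=-1, largest=False).indices # (B, N-keep)
    kept = torch.gather(tokens, 1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
    dropped = torch.gather(tokens, 1, drop_idx.unsqueeze(-1).expand(-1, -1, D))

    # Correlate: link each discarded token to the preserved tokens it is most
    # similar to (cosine similarity, softmax-normalized).
    sim = torch.bmm(F.normalize(dropped, dim=-1),
                    F.normalize(kept, dim=-1).transpose(1, 2))        # (B, N-keep, keep)
    weights = sim.softmax(dim=-1)

    # Compress: fold discarded-token information into the preserved tokens as a
    # weighted residual, limiting how much each kept token is diluted.
    recovered = torch.einsum('bdk,bdf->bkf', weights, dropped)        # (B, keep, D)
    counts = weights.sum(dim=1).unsqueeze(-1)                         # (B, keep, 1)
    return (kept + recovered) / (1.0 + counts)


if __name__ == "__main__":
    B, N, D, keep = 2, 576, 1024, 144        # e.g. LLaVA-style 576 patch tokens
    tokens = torch.randn(B, N, D)
    attn = torch.rand(B, N, N).softmax(dim=-1)
    print(filter_correlate_compress(tokens, attn, keep).shape)  # (2, 144, 1024)
```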
We illustrate the average performance at three TFLOPs budgets across six benchmarks, where our FiCoCo-V and FiCoCo-L are significantly superior to other methods, especially at the lowest budget of 1.5 TFLOPs:
Please refer to our paper for detailed experimental results.
@article{FiCoCo2024,
  title={Filter, Correlate, Compress: Training-Free Token Reduction for MLLM Acceleration},
  author={Yuhang Han and Xuyang Liu and Zihan Zhang and Pengxiang Ding and Donglin Wang and Honggang Chen and Qingsen Yan and Siteng Huang},
  year={2024},
  eprint={2411.17686},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}