
MaTVLM

MaTVLM: Hybrid Mamba-Transformer for Efficient Vision-Language Modeling

Yingyue Li1, Bencheng Liao2,1, Wenyu Liu1, Xinggang Wang1 📧

1 School of EIC, HUST

2 Institute of Artificial Intelligence, HUST

(📧) corresponding author.

Paper: arXiv:2503.13440 · Hugging Face weights: hustvl/MaTVLM_0_25_Mamba2

Abstract

With the advancement of RNN models with linear complexity, the quadratic complexity challenge of transformers has the potential to be overcome. Notably, the emerging Mamba-2 has demonstrated competitive performance, bridging the gap between RNN models and transformers. However, due to sequential processing and vanishing gradients, RNN models struggle to capture long-range dependencies, limiting contextual understanding. This results in slow convergence, high resource demands, and poor performance on downstream understanding and complex reasoning tasks. In this work, we present a hybrid model, MaTVLM, by substituting a portion of the transformer decoder layers in a pre-trained VLM with Mamba-2 layers. Leveraging the inherent relationship between attention and Mamba-2, we initialize Mamba-2 with the corresponding attention weights to accelerate convergence. Subsequently, we employ a single-stage distillation process, using the pre-trained VLM as the teacher model to transfer knowledge to MaTVLM, further enhancing convergence speed and performance. Furthermore, we investigate the impact of differential distillation losses within our training framework. We evaluate MaTVLM on multiple benchmarks, demonstrating competitive performance against the teacher model and existing VLMs while surpassing both Mamba-based VLMs and models of comparable parameter scales. Remarkably, MaTVLM achieves up to 3.6× faster inference than the teacher model while reducing GPU memory consumption by 27.5%, all without compromising performance.
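
As a rough illustration of the single-stage distillation described above, the sketch below shows a generic temperature-scaled KL loss between teacher and student logits combined with the usual token-level cross-entropy. It is a minimal sketch of the general technique; the exact loss terms and weights used by MaTVLM may differ.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Generic logit distillation: temperature-softened KL plus cross-entropy.
    Illustrative only; not the exact objective used in the paper."""
    # Compare the temperature-softened student and teacher distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Standard next-token cross-entropy on the hard labels.
    ce = F.cross_entropy(student_logits.view(-1, student_logits.size(-1)), labels.view(-1))
    return alpha * kl + (1.0 - alpha) * ce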

Contents

  - Install
  - Demo
  - Prepare data
  - Start training!
  - Evaluate
  - Acknowledgements
  - Citation

Install

  1. Clone this repository and navigate to the MaTVLM folder
git clone https://github.com/hustvl/MaTVLM
cd MaTVLM
  2. Install the packages
# Install PyTorch (with CUDA 11.8) before everything else; the commands below assume cu118.
conda create -n matvlm python=3.10 -y
conda activate matvlm
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
wandb login
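
After installation, an optional one-line sanity check confirms that PyTorch was built against CUDA 11.8 and can see your GPUs (a generic check, not part of the official setup):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"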

Demo

Run the following command:

python -m serve.app --model-path hustvl/MaTVLM_0_25_Mamba2
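
If you prefer to download the checkpoint ahead of time, a minimal sketch using huggingface_hub is shown below. It assumes the weights are hosted under the same hustvl/MaTVLM_0_25_Mamba2 repo id used above, and that serve.app also accepts a local directory for --model-path.

from huggingface_hub import snapshot_download

# Fetch the checkpoint into the local Hugging Face cache and print its location.
local_dir = snapshot_download(repo_id="hustvl/MaTVLM_0_25_Mamba2")
print(local_dir)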

Prepare data

Please refer to the TinyLLaVA documentation to download the ShareGPT4V dataset.
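
Once the data is downloaded, a quick check on the annotation file can catch path mistakes early. The sketch below is illustrative: the filename is a placeholder, and the "image"/"conversations" fields are an assumption based on the LLaVA-style layout ShareGPT4V uses.

import json

# Placeholder path; point this at wherever you stored the ShareGPT4V annotations.
with open("data/sharegpt4v_annotations.json") as f:
    samples = json.load(f)

print(f"{len(samples)} samples")
# Expected keys such as "image" and "conversations" (assumed LLaVA-style schema).
print(samples[0].keys())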

Start training!

CUDA_VISIBLE_DEVICES=0,1,2,3 ACCELERATE_LOG_LEVEL=info PYTHONPATH=. accelerate launch --main_process_port=9999 --config_file multi_gpu.yaml train_tinyllava_mamba2/train_hybrid.py mamba2_tinyllava/tinyllava_0.25_mamba2_665k.yaml 2>&1 | tee -a output_train.txt

Training MaTVLM_0.25_Mamba2 takes around 48 hours on 4x RTX 3090 (24 GB) GPUs.
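
For intuition, the attention-weight initialization mentioned in the abstract can be pictured roughly as below: the Q/K/V/output projections of a replaced attention layer seed the corresponding Mamba-2 projections (C, B, x, and the output projection in the linear-attention view). This is an illustrative sketch with placeholder module names, not the repository's actual conversion code.

import torch

def init_mamba2_from_attention(attn, mamba2):
    """Seed a Mamba-2 layer from a pre-trained attention layer (illustrative).
    In the linear-attention correspondence, C plays the role of Q, B of K, and
    x of V; the attribute names below are placeholders, not the real layout."""
    with torch.no_grad():
        mamba2.c_proj.weight.copy_(attn.q_proj.weight)    # C  <- Q
        mamba2.b_proj.weight.copy_(attn.k_proj.weight)    # B  <- K
        mamba2.x_proj.weight.copy_(attn.v_proj.weight)    # x  <- V
        mamba2.out_proj.weight.copy_(attn.o_proj.weight)  # output projection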

Evaluate

CUDA_VISIBLE_DEVICES=0,1,2,3 PYTHONPATH=. bash scripts/eval/test_all_benchmark.sh /data/yingyueli/MambaInLlama/output/MaTVLM_0_25_Mamba2 MaTVLM_0_25_Mamba2 phi 0

Acknowledgements

This code is developed on top of TinyLLaVA and MambaInLlama. Thanks for their great work.

Citation

If you find MaTVLM useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

@misc{li2025matvlmhybridmambatransformerefficient,
      title={MaTVLM: Hybrid Mamba-Transformer for Efficient Vision-Language Modeling}, 
      author={Yingyue Li and Bencheng Liao and Wenyu Liu and Xinggang Wang},
      year={2025},
      eprint={2503.13440},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2503.13440}, 
}
