arXiv:2411.19930

On Domain-Specific Post-Training for Multimodal Large Language Models

Published on Nov 29, 2024
· Submitted by daixuancheng on Dec 2, 2024
Authors:
Bo Dai, et al.
Abstract

Recent years have witnessed the rapid development of general multimodal large language models (MLLMs). However, adapting general MLLMs to specific domains, such as scientific fields and industrial applications, remains less explored. This paper systematically investigates domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation. (1) Data Synthesis: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs. (2) Training Pipeline: While two-stage training (first on image-caption pairs, then on visual instruction tasks) is commonly adopted for developing general MLLMs, we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training. (3) Task Evaluation: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B), and then evaluating MLLM performance on various domain-specific tasks. To support further research in MLLM domain adaptation, we will open-source our implementations.
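The pipeline change in (2) is simple to illustrate: rather than training first on captions and then on instructions, the synthesized instruction tasks and the raw image-caption pairs are mixed into a single training set. The sketch below is a minimal, hypothetical illustration of that single-stage data construction; the function names, the `synthesizer` callable interface, and the captioning prompt template are all assumptions for illustration, not the paper's actual API (the real implementation is in the linked QA-Synthesizer repository).

```python
import random

def synthesize_instruction_tasks(image_caption_pairs, synthesizer):
    """Turn domain-specific image-caption pairs into visual instruction tasks.

    `synthesizer` is assumed to be a callable (e.g., a wrapper around an
    open-source model) mapping an (image, caption) pair to a list of
    (instruction, response) tuples; this interface is hypothetical.
    """
    tasks = []
    for image, caption in image_caption_pairs:
        for instruction, response in synthesizer(image=image, caption=caption):
            tasks.append({"image": image, "prompt": instruction, "target": response})
    return tasks

def build_single_stage_mix(image_caption_pairs, synthesizer, seed=0):
    """Single-stage post-training mix: captioning examples and synthesized
    instruction tasks are shuffled together and trained on in one run,
    instead of the usual two stages (captions first, instructions second)."""
    caption_examples = [
        {"image": img, "prompt": "Describe the image.", "target": cap}
        for img, cap in image_caption_pairs
    ]
    instruction_examples = synthesize_instruction_tasks(image_caption_pairs, synthesizer)
    mix = caption_examples + instruction_examples
    random.Random(seed).shuffle(mix)  # interleave the two task types
    return mix  # fed to a single fine-tuning run over the pretrained MLLM
```

For instance, a toy synthesizer such as `lambda image, caption: [("What does this image show?", caption)]` already yields a valid mix; the paper's synthesizer instead generates diverse task types from each pair.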

Community

Paper author and submitter • edited Mar 25

AdaMLLM represents our latest advancement in building domain-specific foundation models through post-training.

🌟 Project Page: Adapt-MLLM-to-Domains

🔧 Code: https://github.com/bigai-ai/QA-Synthesizer


Models citing this paper: 22

Datasets citing this paper: 9

Spaces citing this paper: 36

Collections including this paper: 14