Liang Zhang, Ludan Ruan, Anwen Hu, Qin Jin. Multimodal Pretraining from Monolingual to Multilingual[J]. Machine Intelligence Research, 2023, 20(2): 220-232. DOI: 10.1007/s11633-022-1414-4

Multimodal Pretraining from Monolingual to Multilingual

Multimodal pretraining has achieved convincing results on various downstream tasks in recent years. However, since the majority of existing works build models on English data, their applications are limited by language. In this work, we address this issue by developing models with both multimodal and multilingual capabilities. We explore two types of methods for extending multimodal pretraining models from monolingual to multilingual. Specifically, we propose a pretraining-based model named multilingual multimodal pretraining (MLMM), and two generalization-based models named multilingual CLIP (M-CLIP) and multilingual acquisition (MLA). In addition, we further extend the generalization-based models to incorporate the audio modality and develop the multilingual CLIP for vision, language, and audio (CLIP4VLA). Our models achieve state-of-the-art performance on multilingual vision-text retrieval, visual question answering, and image captioning benchmarks. Based on the experimental results, we discuss the pros and cons of the two types of models and their potential practical applications.