Qi Zheng, Chao-Yue Wang, Dadong Wang, Da-Cheng Tao. Visual Superordinate Abstraction for Robust Concept Learning[J]. Machine Intelligence Research, 2023, 20(1): 79-91. DOI: 10.1007/s11633-022-1360-1

Visual Superordinate Abstraction for Robust Concept Learning

Abstract: Concept learning constructs visual representations that are connected to linguistic semantics, which is fundamental to vision-language tasks. Although promising progress has been made, existing concept learners remain vulnerable to attribute perturbations and out-of-distribution compositions during inference. We ascribe this bottleneck to a failure to explore the intrinsic semantic hierarchy of visual concepts, e.g., red and blue belong to the "color" subspace while cube belongs to "shape". In this paper, we propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces (i.e., visual superordinates). Using only natural visual question answering data, our model first acquires the semantic hierarchy from a linguistic view and then discovers mutually exclusive visual superordinates under the guidance of that linguistic hierarchy. In addition, quasi-center visual concept clustering and superordinate shortcut learning schemes are proposed to enhance the discrimination and independence of concepts within each visual superordinate. Experiments demonstrate the superiority of the proposed framework under diverse settings: it increases overall answering accuracy by a relative 7.5% for reasoning under perturbations and 15.6% for compositional generalization tests.
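The core idea of visual superordinates can be illustrated with a minimal sketch. The following toy code is not the authors' implementation; it merely assumes, for illustration, that each superordinate (e.g., "color", "shape") is a learned linear subspace of the visual feature space, and that each concept is recognized by its nearest quasi-center within that subspace. All names and parameters here are hypothetical.

```python
import numpy as np

# Hypothetical setup: 16-dim visual features, 4-dim superordinate subspaces.
rng = np.random.default_rng(0)
FEAT_DIM, SUB_DIM = 16, 4
superordinates = {"color": ["red", "blue"], "shape": ["cube", "sphere"]}

# Stand-ins for learned parameters: one projection matrix per superordinate,
# and one quasi-center per concept inside that subspace.
proj = {s: rng.normal(size=(SUB_DIM, FEAT_DIM)) for s in superordinates}
centers = {s: {c: rng.normal(size=SUB_DIM) for c in cs}
           for s, cs in superordinates.items()}

def classify(feature, superordinate):
    """Project a visual feature into the given superordinate's subspace
    and return the concept whose quasi-center lies closest."""
    z = proj[superordinate] @ feature
    return min(centers[superordinate],
               key=lambda c: np.linalg.norm(z - centers[superordinate][c]))

# Build a feature that maps onto the "red" quasi-center (via pseudo-inverse),
# then check that the color superordinate recovers the concept.
f = np.linalg.pinv(proj["color"]) @ centers["color"]["red"]
print(classify(f, "color"))  # -> red
```

Because each superordinate has its own subspace, perturbing, say, shape-related feature directions need not disturb the color decision, which is the kind of robustness the abstract describes.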
