Jiacong Hu, Kun Liu, Yuheng Peng, Ming Zeng, Wenxiong Kang. Exploring Salient Embeddings for Gait Recognition[J]. Machine Intelligence Research, 2025, 22(5): 888-899. DOI: 10.1007/s11633-025-1545-5

Exploring Salient Embeddings for Gait Recognition

Abstract: Gait recognition aims to identify individuals by distinguishing unique walking patterns from video-level pedestrian silhouettes. Previous studies have focused on designing powerful feature extractors to model the spatio-temporal dependencies of gait, thereby obtaining gait features that contain rich semantic information. However, they have overlooked the potential of feature maps for constructing discriminative gait embeddings. In this work, we propose a novel model, EmbedGait, designed to learn salient gait embeddings for improved recognition. Specifically, our framework starts with frame-level spatial alignment to maintain inter-sequence consistency. A horizontal salient mapping (HSM) module is then designed to extract representative embeddings and discard background information through a dedicated pooling operation. The subsequent adaptive embedding weighting (AEW) module adaptively highlights the salient embeddings of different body parts and channels. Extensive experiments on the Gait3D, GREW and SUSTech1K datasets demonstrate that our approach achieves competitive performance across several benchmarks. For example, EmbedGait achieves rank-1 accuracies of 77.3%, 79.0% and 79.6% on Gait3D, GREW and SUSTech1K, respectively.
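The abstract outlines a pipeline of horizontal strip pooling (HSM) followed by adaptive re-weighting of the resulting embeddings (AEW). The sketch below is only a rough PyTorch illustration of those two ideas, not the paper's implementation: the module names HorizontalSalientMapping and AdaptiveEmbeddingWeighting, the max-plus-mean pooling, the gating MLP, and all tensor shapes are assumptions made for illustration.

import torch
import torch.nn as nn


class HorizontalSalientMapping(nn.Module):
    # Rough stand-in for the HSM idea: split each feature map into horizontal
    # strips and pool every strip into one embedding. The max term dominates
    # when a strip is mostly background, so near-empty regions contribute little.
    def __init__(self, num_parts=16):
        super().__init__()
        self.num_parts = num_parts

    def forward(self, x):                              # x: (N, C, H, W), H divisible by num_parts
        n, c, _, _ = x.shape
        x = x.view(n, c, self.num_parts, -1)           # group rows into horizontal strips
        return x.max(dim=-1).values + x.mean(dim=-1)   # (N, C, num_parts)


class AdaptiveEmbeddingWeighting(nn.Module):
    # Rough stand-in for the AEW idea: learn per-part, per-channel gates from
    # the embeddings themselves and re-scale them, so salient parts and
    # channels are emphasized while the rest are suppressed.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, e):                              # e: (N, C, P)
        w = self.gate(e.transpose(1, 2))               # (N, P, C) gates in (0, 1)
        return e * w.transpose(1, 2)                   # re-weighted embeddings, (N, C, P)


if __name__ == "__main__":
    feats = torch.randn(4, 256, 16, 11)                # toy backbone output for 4 silhouette frames
    emb = HorizontalSalientMapping(num_parts=16)(feats)
    emb = AdaptiveEmbeddingWeighting(channels=256)(emb)
    print(emb.shape)                                   # torch.Size([4, 256, 16])

Under these assumed shapes, each of the 16 horizontal strips yields one 256-dimensional embedding per frame, and the gating MLP then rescales each part-channel entry; the actual EmbedGait pooling and weighting schemes may differ.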
