Citation: Xiao Wang, Yao Rong, Shiao Wang, Yuan Chen, Zhe Wu, Bo Jiang, Yonghong Tian, Jin Tang. Unleashing the Power of CNN and Transformer for Balanced RGB-event Video Recognition[J]. Machine Intelligence Research, 2025, 22(6): 1031-1047. DOI: 10.1007/s11633-025-1555-3

Unleashing the Power of CNN and Transformer for Balanced RGB-event Video Recognition

  • Pattern recognition based on RGB-event data is a newly emerging research topic, and previous works usually learn features via a convolutional neural network (CNN) or a transformer. CNNs capture local features well, while cascaded self-attention mechanisms are good at extracting long-range global relations. It is therefore intuitive to combine them for high-performance RGB-event based video recognition; however, existing works fail to achieve a good balance between accuracy and model parameters. In this work, we propose a novel RGB-event based recognition framework termed TSCFormer, a relatively lightweight CNN-Transformer model. Specifically, we adopt a CNN as the backbone network to first encode both the RGB and event data. We then initialize global tokens as input and fuse them with the RGB and event features using the BridgeFormer module, which captures the long-range global relations between the two modalities while keeping the overall model architecture simple. The enhanced features are projected and fused back into the RGB and event CNN blocks, respectively, in an interactive manner via the feature-to-event (F2E) and feature-to-vision (F2V) modules. Similar operations are conducted for the other CNN blocks to achieve adaptive fusion and local-global feature enhancement at different resolutions. Finally, we concatenate these three features and feed them into the classification head for pattern recognition. Extensive experiments on two large-scale RGB-event benchmark datasets, PokerEvent and human activity recognition with dynamic vision sensors (HARDVS), fully validate the effectiveness of the proposed TSCFormer. The source code will be released at https://github.com/Event-AHU/TSCFormer. A minimal illustrative sketch of this fusion pattern follows below.
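The following is a minimal PyTorch sketch of the fusion pattern the abstract describes: CNN stems encode the two modalities, learnable global tokens cross-attend to both (BridgeFormer-style), and the enhanced tokens are projected back into each branch (F2V/F2E) before a concatenated classification head. All module names (ToyTSCFormer, BridgeBlock), dimensions, the single-stage layout, and the 3-channel event-frame input are illustrative assumptions rather than the authors' implementation; the official code is at https://github.com/Event-AHU/TSCFormer.

```python
# Minimal sketch of the CNN-Transformer fusion pattern; all shapes and module
# names are assumptions for illustration, not the official TSCFormer code.
import torch
import torch.nn as nn


class BridgeBlock(nn.Module):
    """Cross-attention: global tokens query the concatenated RGB/event features."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * 4), nn.GELU(), nn.Linear(dim * 4, dim))

    def forward(self, tokens, feats):
        # tokens: (B, T, C); feats: (B, N, C) flattened spatial features
        attn_out, _ = self.attn(tokens, feats, feats)
        tokens = self.norm(tokens + attn_out)
        return tokens + self.mlp(tokens)


class ToyTSCFormer(nn.Module):
    def __init__(self, dim: int = 64, num_tokens: int = 8, num_classes: int = 300):
        super().__init__()
        # Lightweight CNN stems per modality (stand-ins for the real backbone blocks).
        self.rgb_cnn = nn.Sequential(nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU())
        self.evt_cnn = nn.Sequential(nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU())
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, dim))  # global tokens
        self.bridge = BridgeBlock(dim)
        # F2V / F2E: project the enhanced token context back into each CNN branch.
        self.f2v = nn.Linear(dim, dim)
        self.f2e = nn.Linear(dim, dim)
        self.head = nn.Linear(dim * 3, num_classes)

    def forward(self, rgb, evt):
        fv = self.rgb_cnn(rgb)  # (B, C, H, W)
        fe = self.evt_cnn(evt)
        B = fv.shape[0]
        # Flatten both modalities into one token sequence: (B, 2*H*W, C).
        seq = torch.cat([fv, fe], dim=-1).flatten(2).transpose(1, 2)
        tok = self.bridge(self.tokens.expand(B, -1, -1), seq)  # (B, T, C)
        ctx = tok.mean(dim=1)                                   # (B, C)
        # Interactive fusion: broadcast the token context into each branch.
        fv = fv + self.f2v(ctx)[:, :, None, None]
        fe = fe + self.f2e(ctx)[:, :, None, None]
        # Concatenate the three features and classify.
        pooled = torch.cat([fv.mean(dim=(2, 3)), fe.mean(dim=(2, 3)), ctx], dim=-1)
        return self.head(pooled)


if __name__ == "__main__":
    model = ToyTSCFormer()
    rgb = torch.randn(2, 3, 128, 128)
    evt = torch.randn(2, 3, 128, 128)  # assumes events are stacked into 3-channel frames
    print(model(rgb, evt).shape)  # torch.Size([2, 300])
```

In the full model, this token bridging and back-projection would repeat across several CNN stages at different resolutions; the sketch collapses it to a single stage for brevity.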
