Published Online

Research Article
Pedestrian Attribute Recognition in Video Surveillance Scenarios Based on View-attribute Attention Localization
Wei-Chen Chen, Xin-Yi Yu, Lin-Lin Ou
doi: 10.1007/s11633-022-1321-8
Abstract:

Pedestrian attribute recognition in surveillance scenarios is still a challenging task due to the inaccurate localization of specific attributes. In this paper, we propose a novel view-attribute localization method based on attention (VALA), which utilizes view information to guide the recognition process to focus on specific attributes and an attention mechanism to localize the areas corresponding to those attributes. Concretely, view information is leveraged by the view prediction branch to generate four view weights that represent the confidences for attributes from different views. The view weights are then delivered back to compose specific view-attributes, which participate in and supervise deep feature extraction. In order to explore the spatial location of a view-attribute, regional attention is introduced to aggregate spatial information and encode inter-channel dependencies of the view feature. Subsequently, a fine attentive attribute-specific region is localized, and regional weights for the view-attribute at different spatial locations are obtained from the regional attention. The final view-attribute recognition outcome is obtained by combining the view weights with the regional weights. Experiments on three widely used datasets (richly annotated pedestrian (RAP), annotated pedestrian v2 (RAPv2), and PA-100K) demonstrate the effectiveness of our approach compared with state-of-the-art methods.
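The abstract describes combining view confidences with regional attention weights to score attributes. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the view-prediction branch, the regional-attention block, and all shapes and module names are assumptions made for illustration.

```python
# Hypothetical sketch: a backbone feature map is scored by a four-way
# view-prediction branch and a simple regional-attention block, and the
# two sets of weights are combined into per-attribute scores.
import torch
import torch.nn as nn


class ViewAttributeHead(nn.Module):
    def __init__(self, in_channels: int, num_attributes: int, num_views: int = 4):
        super().__init__()
        # View prediction branch: one confidence per view.
        self.view_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, num_views),
            nn.Softmax(dim=1),
        )
        # Regional attention: a 1x1 conv producing one spatial weight map
        # per attribute (a stand-in for the paper's regional attention).
        self.regional_attn = nn.Conv2d(in_channels, num_attributes, kernel_size=1)
        # One attribute classifier per view, applied to attended features.
        self.attr_heads = nn.ModuleList(
            [nn.Linear(in_channels, num_attributes) for _ in range(num_views)]
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) backbone feature map.
        b, c, h, w = feat.shape
        view_w = self.view_branch(feat)                        # (B, V)
        attn = torch.sigmoid(self.regional_attn(feat))         # (B, A, H, W)
        # Pool the feature map under each attribute's regional weights.
        attn_flat = attn.flatten(2)                            # (B, A, H*W)
        feat_flat = feat.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attended = torch.bmm(attn_flat, feat_flat) / (h * w)   # (B, A, C)
        # Score every attribute under each view, then mix by view confidence.
        per_view = torch.stack(
            [(attended * head.weight.unsqueeze(0)).sum(-1) + head.bias
             for head in self.attr_heads], dim=1)              # (B, V, A)
        logits = (view_w.unsqueeze(-1) * per_view).sum(dim=1)  # (B, A)
        return logits


if __name__ == "__main__":
    head = ViewAttributeHead(in_channels=256, num_attributes=51)
    scores = head(torch.randn(2, 256, 8, 8))
    print(scores.shape)  # torch.Size([2, 51])
```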

A Dynamic Resource Allocation Strategy with Reinforcement Learning for Multimodal Multi-objective Optimization
Qian-Long Dang, Wei Xu, Yang-Fei Yuan
doi: 10.1007/s11633-022-1314-7
Abstract:
Many isolation approaches, such as zoning search, have been proposed to preserve diversity in the decision space of multimodal multi-objective optimization (MMO). However, these approaches allocate the same computing resources to subspaces with different difficulties and evolution states. To address this issue, this paper proposes a dynamic resource allocation strategy (DRAS) with reinforcement learning for multimodal multi-objective optimization problems (MMOPs). In DRAS, relative contribution and improvement are utilized to define the aptitude of each subspace, which accurately captures its potential. Moreover, a reinforcement learning method is used to dynamically allocate computing resources to each subspace. In addition, the proposed DRAS is applied to the zoning search. Experimental results demonstrate that DRAS can effectively assist zoning search in finding more and better distributed equivalent Pareto optimal solutions in the decision space.
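The abstract describes an aptitude measure per subspace and a reinforcement-learning rule that redistributes the evaluation budget. Below is a minimal Python sketch of that style of allocation loop, not the authors' algorithm: the aptitude-style reward, the Q-value update, and the placeholder `evolve_subspace` routine are assumptions for illustration only.

```python
# Hypothetical sketch: each generation, every subspace's reward is built
# from its recent contribution and improvement, a Q-value is updated, and
# the next generation's evaluation budget is split by a softmax over Q.
import math
import random


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def allocate_budget(q_values, total_evals, min_evals=10):
    """Split the per-generation evaluation budget across subspaces."""
    probs = softmax(q_values)
    spare = total_evals - min_evals * len(q_values)
    return [min_evals + int(p * spare) for p in probs]


def evolve_subspace(index, evals):
    """Placeholder for running the zoning-search optimizer inside one
    subspace for `evals` evaluations; returns (contribution, improvement)."""
    return random.random() * evals, random.random()


def dras_loop(num_subspaces=8, generations=20, total_evals=400, alpha=0.3):
    q = [0.0] * num_subspaces
    for _ in range(generations):
        budget = allocate_budget(q, total_evals)
        for i in range(num_subspaces):
            contribution, improvement = evolve_subspace(i, budget[i])
            # Aptitude-style reward: contribution relative to the budget
            # the subspace consumed, plus its recent improvement.
            reward = contribution / max(budget[i], 1) + improvement
            q[i] += alpha * (reward - q[i])  # incremental Q-update
    return q


if __name__ == "__main__":
    print(dras_loop())
```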