Research Article
A Novel Attention-based Global and Local Information Fusion Neural Network for Group Recommendation
Song Zhang, Nan Zheng, Dan-Li Wang
doi: 10.1007/s11633-022-1336-1
Abstract:
Due to the popularity of group activities on social media, group recommendation has become increasingly significant. It aims to recommend a list of preferred items to a target group. Most deep learning-based group recommendation methods focus on learning group representations from the single type of interaction between groups and users, and may therefore suffer from the data sparsity problem. Beyond group-user interactions, other interactions can enrich group representations, such as those between groups and items. Such interactions, which take place within the scope of a group, form a local view of that group. In addition to this local information, groups with common interests may also show similar tastes in items, so group representations can also be built from the similarity among groups, which forms a global view of a certain group. In this paper, we propose a novel global and local information fusion neural network (GLIF) model for group recommendation. In GLIF, an attentive neural network (ANN) exploits the rich interactions among groups, users and items to form a group's local representation. The model also leverages an ANN to obtain a group's global representation based on the similarity among different groups. It then fuses the global and local representations through an attention mechanism to form a group's comprehensive representation. Finally, group recommendation is conducted under the neural collaborative filtering (NCF) framework. Extensive experiments on three public datasets demonstrate its superiority over state-of-the-art group recommendation methods.
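As an illustration of the fusion step described in the abstract, the following minimal PyTorch sketch shows how a local and a global group embedding can be combined with learned attention weights. The module and names (AttentiveFusion, local_rep, global_rep) and the single-layer scorer are assumptions for illustration, not the authors' GLIF implementation.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Fuse a group's local and global representations with learned attention weights.
    Illustrative sketch only; the scoring network and dimensions are assumptions."""
    def __init__(self, dim: int):
        super().__init__()
        # One scalar attention score per candidate representation.
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, local_rep: torch.Tensor, global_rep: torch.Tensor) -> torch.Tensor:
        # local_rep, global_rep: (batch, dim)
        reps = torch.stack([local_rep, global_rep], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(reps), dim=1)     # (batch, 2, 1)
        return (weights * reps).sum(dim=1)                   # (batch, dim)

# Example: fuse 64-dimensional local and global group embeddings.
fusion = AttentiveFusion(dim=64)
fused = fusion(torch.randn(8, 64), torch.randn(8, 64))
print(fused.shape)  # torch.Size([8, 64])
```

In an NCF-style setup, the fused group embedding would then be scored against item embeddings to produce the recommendation list.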
Effective and Robust Detection of Adversarial Examples via Benford-Fourier Coefficients
Cheng-Cheng Ma, Bao-Yuan Wu, Yan-Bo Fan, Yong Zhang, Zhi-Feng Li
doi: 10.1007/s11633-022-1328-1
Abstract:
Adversarial examples are well known as a serious threat to deep neural networks (DNNs). In this work, we study the detection of adversarial examples based on the assumption that the output and internal responses of a DNN model for both adversarial and benign examples follow the generalized Gaussian distribution (GGD), but with different parameters (i.e., shape factor, mean, and variance). GGD is a general distribution family that covers many popular distributions (e.g., Laplacian, Gaussian, and uniform), and is therefore more likely to approximate the intrinsic distribution of internal responses than any specific distribution. Moreover, since the shape factor is more robust across different databases than the other two parameters, we propose to construct discriminative features from the shape factor for adversarial detection, using the magnitude of Benford-Fourier (MBF) coefficients, which can be easily estimated from the responses. Finally, a support vector machine is trained on the MBF features as an adversarial detector. Extensive experiments on image classification demonstrate that the proposed detector is much more effective and robust than state-of-the-art adversarial detection methods in detecting adversarial examples from different crafting methods and sources.
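The abstract states that MBF coefficients can be estimated directly from model responses and fed to an SVM. The sketch below uses one common empirical estimator, the magnitude of the sample mean of exp(-2*pi*i*n*log10|x|); this exact estimator, the coefficient orders, and the toy data are assumptions for illustration, not necessarily the paper's formulation.

```python
import numpy as np
from sklearn.svm import SVC

def mbf_features(responses: np.ndarray, orders=(1, 2, 3), eps: float = 1e-12) -> np.ndarray:
    """Empirical magnitude-of-Benford-Fourier (MBF) features for one set of responses.
    Assumed estimator: the n-th coefficient is |mean_j exp(-2*pi*i*n*log10|x_j|)|."""
    logs = np.log10(np.abs(responses.ravel()) + eps)
    return np.array([np.abs(np.exp(-2j * np.pi * n * logs).mean()) for n in orders])

# Hypothetical usage: each element of benign_resp / adv_resp is the internal response
# (e.g., one layer's activations) of the DNN for a single example; 0 = benign, 1 = adversarial.
benign_resp = [np.random.randn(1000) for _ in range(50)]
adv_resp = [np.random.randn(1000) * 1.5 for _ in range(50)]
X = np.stack([mbf_features(r) for r in benign_resp + adv_resp])
y = np.array([0] * len(benign_resp) + [1] * len(adv_resp))
detector = SVC(kernel="rbf").fit(X, y)   # SVM adversarial detector on MBF features
```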
Review
Paradigm Shift in Natural Language Processing
Tian-Xiang Sun, Xiang-Yang Liu, Xi-Peng Qiu,  Xuan-Jing Huang
2022,  vol. 19,  no. 3, pp. 169-183,  doi: 10.1007/s11633-022-1331-6
Abstract:
In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a bundle of tasks such as POS tagging, named entity recognition (NER), and chunking, and adopt the classification paradigm to solve tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift, i.e., solving one NLP task in a new paradigm by reformulating the task. The paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
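As a concrete example of such a paradigm shift, the sketch below reformulates sentiment classification as a cloze (masked language modeling) task using a pre-trained model. The template, the label words (verbalizer), and the choice of bert-base-uncased are illustrative assumptions, not examples taken from the paper.

```python
from transformers import pipeline

# Classification paradigm -> (M)LM paradigm: predict the word that fills the template slot.
fill = pipeline("fill-mask", model="bert-base-uncased")

def classify(review: str) -> str:
    prompt = f"{review} Overall, the movie was [MASK]."
    verbalizer = {"great": "positive", "good": "positive",
                  "terrible": "negative", "bad": "negative"}
    # Keep only predictions that map to a sentiment label; take the most probable one.
    preds = [p for p in fill(prompt, top_k=50) if p["token_str"] in verbalizer]
    return verbalizer[preds[0]["token_str"]] if preds else "unknown"

print(classify("The plot dragged and the acting felt flat."))  # likely "negative"
```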
Machine Learning for Cataract Classification/Grading on Ophthalmic Imaging Modalities: A Survey
Xiao-Qing Zhang, Yan Hu, Zun-Jie Xiao, Jian-Sheng Fang, Risa Higashita, Jiang Liu
2022,  vol. 19,  no. 3, pp. 184-208,  doi: 10.1007/s11633-022-1329-0
Abstract:
Cataracts are the leading cause of visual impairment and blindness globally. Over the years, researchers have made significant progress in developing state-of-the-art machine learning techniques for automatic cataract classification and grading, aiming to prevent cataracts early and improve clinicians' diagnostic efficiency. This paper provides a comprehensive survey of recent advances in machine learning techniques for cataract classification/grading based on ophthalmic images. We summarize the existing literature from two research directions, conventional machine learning methods and deep learning methods, and discuss both the merits and the limitations of existing works. In addition, we discuss several challenges of automatic cataract classification/grading based on machine learning techniques and present possible solutions to these challenges for future research.
Research Article
Towards Interpretable Defense Against Adversarial Attacks via Causal Inference
Min Ren, Yun-Long Wang, Zhao-Feng He
2022,  vol. 19,  no. 3, pp. 209-226,  doi: 10.1007/s11633-022-1330-7
Abstract:
Deep learning-based models are vulnerable to adversarial attacks, and defense against such attacks is essential for sensitive and safety-critical scenarios. However, deep learning methods still lack effective and efficient defense mechanisms: most existing methods are merely stopgaps for specific adversarial samples. The main obstacle is that it remains unclear how adversarial samples fool deep learning models. The underlying working mechanism of adversarial samples has not been well explored, and this is the bottleneck of adversarial defense. In this paper, we build a causal model to interpret the generation and performance of adversarial samples, adopting the self-attention/transformer as a powerful tool within this causal model. Compared to existing methods, causality enables us to analyze adversarial samples more naturally and intrinsically. Based on this causal model, the working mechanism of adversarial samples is revealed and instructive analysis is provided. We then propose simple and effective adversarial sample detection and recognition methods according to the revealed working mechanism. The causal insights enable us to detect and recognize adversarial samples without any extra model or training. Extensive experiments demonstrate the effectiveness of the proposed methods, which outperform state-of-the-art defense methods under various adversarial attacks.
TwinNet: Twin Structured Knowledge Transfer Network for Weakly Supervised Action Localization
Xiao-Yu Zhang, Hai-Chao Shi, Chang-Sheng Li, Li-Xin Duan
2022,  vol. 19,  no. 3, pp. 227-246,  doi: 10.1007/s11633-022-1333-4
Abstract:
Action recognition and localization in untrimmed videos are important for many applications and have attracted considerable attention. Since full supervision with frame-level annotation places an overwhelming burden on manual labeling, learning with weak video-level supervision becomes a potential solution. In this paper, we propose a novel weakly supervised framework that simultaneously recognizes actions and locates the corresponding frames in untrimmed videos. Considering that abundant trimmed videos are publicly available and well segmented with semantic descriptions, the instructive knowledge learned on trimmed videos can be fully leveraged to analyze untrimmed videos. We present an effective knowledge transfer strategy based on inter-class semantic relevance. We also take advantage of the self-attention mechanism to obtain a compact video representation, such that the influence of background frames can be effectively eliminated. A learning architecture with twin networks for trimmed and untrimmed videos is designed to facilitate transferable self-attentive representation learning. Extensive experiments on three untrimmed benchmark datasets (THUMOS14, ActivityNet1.3, and MEXaction2) clearly corroborate the efficacy of our method. It is especially encouraging that the proposed weakly supervised method achieves results comparable to some fully supervised methods.
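To illustrate the self-attentive aggregation step described above (suppressing background frames when forming a compact video representation), here is a generic PyTorch sketch. The module name, the single linear scorer, and the feature dimensions are assumptions, not the TwinNet code.

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Aggregate per-frame features into one video representation, down-weighting
    background frames via learned attention. Generic sketch, not the TwinNet model."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # frame-level attention score

    def forward(self, frames: torch.Tensor):
        # frames: (batch, num_frames, dim)
        weights = torch.softmax(self.score(frames), dim=1)   # (batch, num_frames, 1)
        video_rep = (weights * frames).sum(dim=1)            # (batch, dim)
        return video_rep, weights

pool = AttentivePooling(dim=128)
video_rep, weights = pool(torch.randn(4, 300, 128))   # 300 frame features per untrimmed video
print(video_rep.shape, weights.shape)  # torch.Size([4, 128]) torch.Size([4, 300, 1])
```

Under weak supervision, the learned frame weights themselves can serve as localization cues: frames receiving high attention are the ones most likely to contain the action.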
Dense Face Network: A Dense Face Detector Based on Global Context and Visual Attention Mechanism
Lin Song, Jin-Fu Yang, Qing-Zhen Shang, Ming-Ai Li
2022,  vol. 19,  no. 3, pp. 247-256,  doi: 10.1007/s11633-022-1327-2
Abstract:
Face detection has made tremendous strides thanks to convolutional neural networks. However, dense face detection remains an open challenge due to large face scale variation, tiny faces, and severe occlusion. This paper presents a robust dense face detector using global context and visual attention mechanisms, which significantly improve detection accuracy. Specifically, a global context fusion module with top-down feedback is proposed to improve the ability to identify tiny faces, and a visual attention mechanism is employed to address occlusion. Experimental results on the public face datasets WIDER FACE and FDDB demonstrate the effectiveness of the proposed method.
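The top-down feedback idea mentioned above can be sketched as an FPN-style fusion in which a coarse, semantically rich map is upsampled and merged into a finer map used for detecting tiny faces. The module name, channel sizes, and layer choices below are assumptions for illustration, not the paper's exact global context fusion module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """Fuse a high-level (global context) feature map into a lower-level one via
    top-down feedback, FPN-style. Illustrative sketch only."""
    def __init__(self, high_ch: int, low_ch: int):
        super().__init__()
        self.reduce = nn.Conv2d(high_ch, low_ch, kernel_size=1)          # align channels
        self.smooth = nn.Conv2d(low_ch, low_ch, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # low: (B, low_ch, H, W); high: (B, high_ch, H/2, W/2)
        up = F.interpolate(self.reduce(high), size=low.shape[-2:], mode="nearest")
        return self.smooth(low + up)   # enriched fine-scale map for tiny-face detection

fuse = TopDownFusion(high_ch=512, low_ch=256)
out = fuse(torch.randn(1, 256, 80, 80), torch.randn(1, 512, 40, 40))
print(out.shape)  # torch.Size([1, 256, 80, 80])
```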
Feature Selection and Feature Learning for High-dimensional Batch Reinforcement Learning: A Survey
De-Rong Liu, Hong-Liang Li, Ding Wang
2015,  vol. 12,  no. 3, pp. 229-242,  doi: 10.1007/s11633-015-0893-y
Second-order Sliding Mode Approaches for the Control of a Class of Underactuated Systems
Sonia Mahjoub, Faiçal Mnif, Nabil Derbel
2015,  vol. 12,  no. 2, pp. 134-141,  doi: 10.1007/s11633-015-0880-3
Genetic Algorithm with Variable Length Chromosomes for Network Intrusion Detection
Sunil Nilkanth Pawar, Rajankumar Sadashivrao Bichkar
2015,  vol. 12,  no. 3, pp. 337-342,  doi: 10.1007/s11633-014-0870-x
Recent Progress in Networked Control Systems-A Survey
Yuan-Qing Xia, Yu-Long Gao, Li-Ping Yan, Meng-Yin Fu
2015,  vol. 12,  no. 4, pp. 343-367,  doi: 10.1007/s11633-015-0894-x
Grey Qualitative Modeling and Control Method for Subjective Uncertain Systems
Peng Wang, Shu-Jie Li, Yan Lv, Zong-Hai Chen
2015,  vol. 12,  no. 1, pp. 70-76,  doi: 10.1007/s11633-014-0820-7
Cooperative Formation Control of Autonomous Underwater Vehicles: An Overview
Bikramaditya Das, Bidyadhar Subudhi, Bibhuti Bhusan Pati
2016,  vol. 13,  no. 3, pp. 199-225,  doi: 10.1007/s11633-016-1004-4
A Wavelet Neural Network Based Non-linear Model Predictive Controller for a Multi-variable Coupled Tank System
Kayode Owa, Sanjay Sharma, Robert Sutton
2015,  vol. 12,  no. 2, pp. 156-170,  doi: 10.1007/s11633-014-0825-2
An Unsupervised Feature Selection Algorithm with Feature Ranking for Maximizing Performance of the Classifiers
Danasingh Asir Antony Gnana Singh, Subramanian Appavu Alias Balamurugan, Epiphany Jebamalar Leavline
2015,  vol. 12,  no. 5, pp. 511-517,  doi: 10.1007/s11633-014-0859-5
Bounded Real Lemmas for Fractional Order Systems
Shu Liang, Yi-Heng Wei, Jin-Wen Pan, Qing Gao, Yong Wang
2015,  vol. 12,  no. 2, pp. 192-198,  doi: 10.1007/s11633-014-0868-4
Robust Face Recognition via Low-rank Sparse Representation-based Classification
Hai-Shun Du, Qing-Pu Hu, Dian-Feng Qiao, Ioannis Pitas
2015,  vol. 12,  no. 6, pp. 579-587,  doi: 10.1007/s11633-015-0901-2
Sliding Mode and PI Controllers for Uncertain Flexible Joint Manipulator
Lilia Zouari, Hafedh Abid, Mohamed Abid
2015,  vol. 12,  no. 2, pp. 117-124,  doi: 10.1007/s11633-015-0878-x
Advances in Vehicular Ad-hoc Networks (VANETs): Challenges and Road-map for Future Development
Elias C. Eze, Si-Jing Zhang, En-Jie Liu, Joy C. Eze
2016,  vol. 13,  no. 1, pp. 1-18,  doi: 10.1007/s11633-015-0913-y
Distributed Control of Chemical Process Networks
Michael J. Tippett, Jie Bao
2015,  vol. 12,  no. 4, pp. 368-381,  doi: 10.1007/s11633-015-0895-9
Appropriate Sub-band Selection in Wavelet Packet Decomposition for Automated Glaucoma Diagnoses
Chandrasekaran Raja, Narayanan Gangatharan
2015,  vol. 12,  no. 4, pp. 393-401,  doi: 10.1007/s11633-014-0858-6
Analysis of Fractional-order Linear Systems with Saturation Using Lyapunov's Second Method and Convex Optimization
Esmat Sadat Alaviyan Shahri, Saeed Balochian
2015,  vol. 12,  no. 4, pp. 440-447,  doi: 10.1007/s11633-014-0856-8
Extracting Parameters of OFET Before and After Threshold Voltage Using Genetic Algorithms
Imad Benacer, Zohir Dibi
2016,  vol. 13,  no. 4, pp. 382-391,  doi: 10.1007/s11633-015-0918-6
Generalized Norm Optimal Iterative Learning Control with Intermediate Point and Sub-interval Tracking
David H. Owens, Chris T. Freeman, Bing Chu
2015,  vol. 12,  no. 3, pp. 243-253,  doi: 10.1007/s11633-015-0888-8
Backstepping Control of Speed Sensorless Permanent Magnet Synchronous Motor Based on Slide Model Observer
Cai-Xue Chen, Yun-Xiang Xie, Yong-Hong Lan
2015,  vol. 12,  no. 2, pp. 149-155,  doi: 10.1007/s11633-015-0881-2
Flexible Strip Supercapacitors for Future Energy Storage
Rui-Rong Zhang, Yan-Meng Xu, David Harrison, John Fyson, Fu-Lian Qiu, Darren Southee
2015,  vol. 12,  no. 1, pp. 43-49,  doi: 10.1007/s11633-014-0866-6
A High-order Internal Model Based Iterative Learning Control Scheme for Discrete Linear Time-varying Systems
Wei Zhou, Miao Yu, De-Qing Huang
2015,  vol. 12,  no. 3, pp. 330-336,  doi: 10.1007/s11633-015-0886-x
Finite-time Control for a Class of Networked Control Systems with Short Time-varying Delays and Sampling Jitter
Chang-Chun Hua, Shao-Chong Yu, Xin-Ping Guan
2015,  vol. 12,  no. 4, pp. 448-454,  doi: 10.1007/s11633-014-0849-7

2022 Vol.19 No. 3

Editor-in-chief
Tieniu TAN

ISSN 2731-538X
