Latest Accepted Articles

Research Article
Evolutionary Computation for Expensive Optimization: A Survey
Jian-Yu Li, Zhi-Hui Zhan, Jun Zhang
Available online
Abstract:
Expensive optimization problems (EOPs) are widespread in significant real-world applications. However, evaluating a candidate solution of an EOP incurs an expensive or even unaffordable cost, which makes it costly for an algorithm to find a satisfactory solution. Moreover, driven by fast-growing application demands in the economy and society, such as the emergence of smart cities, the Internet of things, and the big data era, solving EOPs more efficiently has become increasingly essential in various fields, which poses great challenges to the problem-solving ability of optimization approaches for EOPs. Among these approaches, evolutionary computation (EC) has been a promising global optimization tool widely used for solving EOPs efficiently over the past decades. Given the fruitful advancements of EC for EOPs, it is essential to review them in order to synthesize previous research experience and provide references that aid the development of relevant research fields and real-world applications. Motivated by this, this paper provides a comprehensive survey of why and how EC can solve EOPs efficiently. To this end, the paper first analyzes the total optimization cost of EC in solving an EOP. Based on this analysis, three promising research directions for solving EOPs are identified: problem approximation and substitution, algorithm design and enhancement, and parallel and distributed computation. To the best of our knowledge, this paper is the first to outline these directions for efficiently solving EOPs by analyzing the total expensive cost. On this basis, existing works are reviewed comprehensively via a taxonomy with four parts: the above three research directions and real-world applications. Some future research directions are also discussed in this paper.
It is believed that such a survey can attract attention, encourage discussions, and stimulate new EC research ideas for solving EOP and related real-world applications more efficiently.
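The "problem approximation and substitution" direction mentioned in the abstract typically replaces some expensive evaluations with a cheap surrogate. A minimal sketch of that idea follows; the sphere function, the nearest-neighbor surrogate, and all parameter choices are illustrative assumptions, not the survey's method:

```python
import random

def expensive_fitness(x):
    # Stand-in for a costly simulation: the sphere function (to be minimized).
    return sum(v * v for v in x)

def surrogate(x, archive):
    # Cheap nearest-neighbor surrogate built from already-evaluated points:
    # predict the true fitness of the closest archived solution.
    nearest = min(archive, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return nearest[1]

def surrogate_assisted_es(dim=5, generations=30, lam=10, seed=0):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    parent_fit = expensive_fitness(parent)
    archive = [(parent, parent_fit)]          # (solution, true fitness) pairs
    expensive_evals = 1
    for _ in range(generations):
        # Generate lam offspring, but truly evaluate only the one the
        # surrogate ranks best -- this is where expensive calls are saved.
        offspring = [[v + rng.gauss(0, 0.5) for v in parent] for _ in range(lam)]
        best_guess = min(offspring, key=lambda x: surrogate(x, archive))
        fit = expensive_fitness(best_guess)
        expensive_evals += 1
        archive.append((best_guess, fit))
        if fit < parent_fit:                  # (1+1)-style elitist replacement
            parent, parent_fit = best_guess, fit
    return parent_fit, expensive_evals

best, evals = surrogate_assisted_es()
```

Without pre-screening, all `generations * lam` offspring would need true evaluations; here only one per generation is paid for, at the price of surrogate error.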
Multi-Dimensional Classification via Selective Feature Augmentation
Bin-Bin Jia, Min-Ling Zhang
Available online
Abstract:
In multi-dimensional classification (MDC), the semantics of objects are characterized by multiple class spaces from different dimensions. Most MDC approaches try to explicitly model the dependencies among class spaces in the output space. In contrast, the recently proposed feature augmentation strategy, which manipulates the feature space, has also been shown to be an effective solution for MDC. However, existing feature augmentation approaches only focus on designing a holistic set of augmented features to be appended to the original features, while better generalization performance could be achieved by exploiting multiple kinds of augmented features. In this paper, we propose a selective feature augmentation strategy that synergizes multiple kinds of augmented features. Specifically, by assuming that only some of the augmented features are pertinent and useful for each dimension's model induction, we derive a classification model that fully utilizes the original features while conducting feature selection over the augmented ones. To validate the effectiveness of the proposed strategy, we generate three kinds of simple augmented features based on standard kNN, weighted kNN, and maximum margin techniques, respectively. Comparative studies show that the proposed strategy achieves superior performance against both state-of-the-art MDC approaches and its degenerate versions with any single kind of augmented features.
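To make the kNN-based augmented features concrete, here is a hedged sketch of one plausible construction: appending to each sample the label distribution of its k nearest neighbors. The dataset, the distance metric, and this exact feature form are assumptions for illustration; the paper's actual construction may differ:

```python
import math
from collections import Counter

def knn_augment(X, y, k=2, n_classes=2):
    """Append to each sample the class distribution of its k nearest
    neighbors (Euclidean distance, leave-one-out over the training set).
    One simple, hypothetical kind of kNN augmented feature."""
    augmented = []
    for i, xi in enumerate(X):
        # Distances to every other training sample.
        dists = sorted((math.dist(xi, xj), j) for j, xj in enumerate(X) if j != i)
        votes = Counter(y[j] for _, j in dists[:k])
        extra = [votes.get(c, 0) / k for c in range(n_classes)]
        augmented.append(list(xi) + extra)    # original features kept intact
    return augmented

# Toy 2-class data: one cluster near the origin, one near (5, 5).
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = [0, 0, 0, 1, 1, 1]
Xa = knn_augment(X, y, k=2)
```

A selective strategy, as the abstract describes, would then keep the original columns untouched and apply feature selection only to the appended columns, per output dimension.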
A Framework for Distributed Semi-supervised Learning Using Single-layer Feedforward Networks
Jin Xie, San-Yang Liu, Jia-Xi Chen
Available online
Abstract:
This paper proposes a framework for manifold regularization (MR) based distributed semi-supervised learning (DSSL) using single-layer feed-forward neural networks (SLFNNs). The proposed framework, denoted DSSL-SLFNN, builds on the SLFNN, the MR framework, and a distributed optimization strategy, and a series of algorithms is derived from it to solve DSSL problems. In DSSL problems, data consisting of labeled and unlabeled samples are distributed over a communication network, where each node has access only to its own data and can communicate only with its neighbors; in some scenarios, such problems cannot be solved by centralized algorithms. Under the DSSL-SLFNN framework, each node in the network exchanges the initial parameters of an SLFNN with the same basis functions for semi-supervised learning (SSL). All nodes then compute the globally optimal coefficients of the SLFNN using the distributed datasets and local updates. During the learning process, each node exchanges only local coefficients with its neighbors rather than raw data, so DSSL-SLFNN based algorithms work in a fully distributed fashion and are privacy-preserving. Finally, several simulations demonstrate the efficiency of the proposed framework and the derived algorithms.
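The "exchange coefficients with neighbors, never raw data" mechanism can be sketched as plain neighbor-averaging consensus. The ring topology, the equal-weight averaging rule, and the toy coefficient vectors below are illustrative assumptions, not the paper's update rule:

```python
def consensus_step(coeffs, neighbors):
    """One communication round: each node replaces its coefficient vector
    with the mean of its own vector and its neighbors' vectors.
    Only coefficients cross the network -- no raw samples."""
    new = []
    for i, w in enumerate(coeffs):
        group = [w] + [coeffs[j] for j in neighbors[i]]
        new.append([sum(col) / len(group) for col in zip(*group)])
    return new

# 4 nodes on a ring; each starts from coefficients fit on its own local data.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
coeffs = [[1.0, 0.0], [3.0, 2.0], [5.0, 4.0], [7.0, 6.0]]
for _ in range(50):
    coeffs = consensus_step(coeffs, neighbors)
# On this connected ring with symmetric equal weights, every node's vector
# converges to the global average of the initial vectors.
```

In the full framework a local gradient step on each node's labeled/unlabeled data would be interleaved with such consensus rounds; this sketch isolates only the communication pattern.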