SURROGATE MODELS FOR MULTI-OBJECTIVE
EVOLUTIONARY ALGORITHMS, A SURVEY
AND CURRENT RESEARCH TRENDS
Nguyen Duc Dinh1, Nguyen Long2*, Nguyen Hong Nghi3, Nguyen Xuan Hoai4
Abstract: Multi-objective problems (MOPs) are a class of real-world optimization problems which have multiple conflicting objectives. Multi-objective evolutionary algorithms (MOEAs) are known as algorithms with great potential for solving difficult MOPs. With MOEAs, based on the population principle, we obtain a set of optimal solutions (a feasible solution set) after the search. Using the concept of the dominance relationship in the population, it is not difficult to find the set of Pareto optimal solutions over the generations. However, with expensive optimization problems in the real world, a large number of fitness function evaluations have to be used during the search. To avoid expensive physical experiments, we can use computer simulation methods to solve difficult MOPs. In practice, this approach is often costly in computation and time for the simulation. In these cases, researchers have discussed the usage of surrogate models for evolutionary algorithms, especially for MOEAs, to minimize the number of fitness function calls. In this paper we give a short overview of surrogate models for MOEAs, the state of the art, and current research trends.
Keywords: Surrogate models; Approximation models; Meta-models; MOEAs.
1. COMMON CONCEPTS
1.1. Expensive multi-objective problems
A multi-objective optimization problem involves at least two conflicting
objectives and it has a set of Pareto optimal solutions. MOEAs use a population of
solutions to approximate the Pareto optimal set in a single run. MOEAs have
attracted a lot of research attention during the past decade. They are still one of the
hottest research areas in the field of Computational Intelligence, especially to solve
difficult MOPs in the real world.
A multi-objective optimization problem is formulated as follows [16]:

minimize $\{f_1(x), f_2(x), \ldots, f_k(x)\}$
subject to $x \in S$,     (1)

where $k$ ($\geq 2$) is the number of objectives and $f_i: \mathbb{R}^n \to \mathbb{R}$ are the objective functions. The vector of objective functions is denoted by $f(x) = (f_1(x), f_2(x), \ldots, f_k(x))^T$. The decision (variable) vector $x = (x_1, x_2, \ldots, x_n)^T$ belongs to the feasible region (set) $S$, which is a subset of the decision variable space $\mathbb{R}^n$. The term "minimize" means that all objective functions are minimized simultaneously.
If there is no conflict between objective functions, then a solution can be found
where every objective function attains its optimum. In this case, no special
methods are needed. To avoid such trivial cases we assume that there does not
exist a single solution that is optimal with respect to every objective function. This
means that the objective functions are at least partly conflicting.
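For illustration, a minimal sketch of such a problem is given below; the two quadratic objectives and the dominance check are illustrative choices, not taken from the paper.

import numpy as np

def f1(x):
    # First objective: minimized at x = 0.
    return float(np.sum(x ** 2))

def f2(x):
    # Second objective: minimized at x = 2, so it conflicts with f1.
    return float(np.sum((x - 2.0) ** 2))

def evaluate(x):
    # Objective vector f(x) = (f1(x), f2(x))^T for a decision vector x in S.
    return np.array([f1(x), f2(x)])

def dominates(fa, fb):
    # fa Pareto-dominates fb if it is no worse in every objective
    # and strictly better in at least one (minimization is assumed).
    return bool(np.all(fa <= fb) and np.any(fa < fb))

x_a, x_b = np.array([1.0]), np.array([3.0])
print(evaluate(x_a), evaluate(x_b))             # [1. 1.] [9. 1.]
print(dominates(evaluate(x_a), evaluate(x_b)))  # True: x_a dominates x_b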
In multi-objective optimization, a problem is defined as an expensive problem
when it has fitness functions which require time-consuming experiments or
computer simulations.
1.2. Surrogate models
Surrogate models are used to approximate the expensive simulation and thus reduce the computational cost of expensive problems. The models are described as follows: if $f(x)$ is the original fitness function of a MOP, then the meta (surrogate) function $f'(x)$ is given by

$f'(x) = f(x) + e(x)$     (2)

where the function $e(x)$ is the approximation error. In this case, the fitness function $f(x)$ does not need to be known explicitly; only its input and output values matter. Based on the responses of the simulator on a chosen dataset, a surrogate is constructed; the model then provides a simple representation that describes the relation between the input and output variables. There are several approaches to building surrogate models, which can be divided into the following kinds:
1.2.1. The radial basis function (RBF)
In the proposal of [8], the author suggests an approach to develop equations of topography and other irregular surfaces; to describe the method, the term "multiquadric analysis" is used. The radial basis function (RBF) is a real-valued function whose value depends on the distance from a center point of the neuron to the input point; a specified point in the neuron may be used as the center point. The radial function has the form

$\varphi(x) = \varphi(\|x\|)$     (3)
An RBF network usually consists of three different layers:
• Input layer: applies an identity function.
• Hidden layer: applies a non-linear RBF activation function.
• Output layer: computes the mapping $\varphi: \mathbb{R}^n \to \mathbb{R}$, $\varphi(x) = \sum_{i=1}^{N} w_i \, \varphi(\|x - c_i\|)$ (a minimal sketch of this construction follows the list).
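To make this structure concrete, the following is a minimal sketch of a Gaussian RBF surrogate fitted by linear least squares; the choice of centers, the fixed width, and the toy objective are assumptions made for the example and do not reproduce any of the cited proposals.

import numpy as np

def rbf_design(X, centers, width):
    # Gaussian basis phi(r) = exp(-(r / width)^2), one column per center c_i.
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(r / width) ** 2)

def rbf_fit(X, y, centers, width):
    # Output-layer weights w_i obtained by linear least squares.
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, width), y, rcond=None)
    return w

def rbf_predict(Xnew, centers, width, w):
    return rbf_design(Xnew, centers, width) @ w

# A handful of expensive evaluations (a cheap stand-in for a costly simulation).
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(20, 2))
y = np.sum(X ** 2, axis=1)
centers, width = X[:10], 1.0                  # centers c_i and a fixed RBF width
w = rbf_fit(X, y, centers, width)
print(rbf_predict(np.array([[0.5, -0.5]]), centers, width, w))  # close to 0.5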
In this approach, the tunable parameters are the center vectors $c_i$, the weights $w_i$, and the RBF widths. The optimization process is the task of tuning these parameters. Based on this model, several proposals have appeared recently [11, 26, 5, 17]; in detail:
In [11], the authors proposed an algorithm which performs actual analysis for
the initial population and periodically every few generations. An external archive
of the unique solutions evaluated using the actual analysis is maintained to train
the surrogate models. The data points in the archive are split into multiple
partitions using k-Means clustering. A RBF network surrogate model is built for
each partition using a fraction of the points in that partition. The rest of the points
in the partition are used as validation data to decide the prediction accuracy of the
surrogate model. Prediction of a new candidate solution is done by the surrogate
model with the least prediction error in the neighborhood of that point.
The authors in [26] introduced a modified version of MOEA/D which is assisted
by cooperative RBF networks, with the aim of improving the prediction of the
function value. The RBF networks employed here, use different kernels in order to
have different shapes of the fitness landscape. With that, each RBF network
Công nghệ thông tin
N. D. Dinh, , N. X. Hoai, “Surrogate models for multi-objective research trends.” 122
provides information which is used to improve the value of the objective function.
In [5], the authors presented a meta-model assisted memetic algorithm, using
online trained local meta-models supporting both the inexact pre-evaluation of
evolving populations and the local search. The use of RBF networks to screen out
non-promising individuals in each generation, so as to avoid evaluating them with
the exact and costly evaluation tool, is currently a well established technique; it
considerably reduces the CPU cost of evolutionary computations and makes them
competitive for engineering applications with computationally expensive
evaluation tools.
In [17], the authors proposed a new MOEA called MODE-LD+SS, which
combines differential evolution with local dominance and scalar selection
mechanisms. Local dominance aims to improve the convergence rate and the scalar
selection mechanism intends to improve the distribution of solutions along the
Pareto front.
1.2.2. The polynomial response surface (PRS)
The authors in [21] suggested using statistical concepts to perform regression and analyze the variances in order to find the minimum response variance. This methodology is called the response surface methodology (RSM). Based on RSM and polynomials, a popular approach was proposed, called polynomial response surfaces (PRS). In PRS, the model is a linear combination of powers and products of the variables in the training set. The model has the form:
$\hat{y} = \beta^T x^{(p)}$     (4)

Here, the dataset of size $n$ is denoted by $x_1, x_2, \ldots, x_n$, $\beta$ is the vector of coefficients to be estimated, and $x^{(p)}$ is the vector of polynomial terms (powers and products such as $x_1^{(p)}$ and $x_2^{(p)}$) built from a sample $x$.
In the proposal, two common methods are used to estimate the unknown coefficients: gradient-based and least-squares methods. In this case, the number of coefficients determines the number of samples that we need.
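As an illustration of Eq. (4), the sketch below fits a quadratic response surface by least squares; the two-variable quadratic basis and the toy data are assumptions made for the example.

import numpy as np

def prs_design(X):
    # Quadratic polynomial terms for two variables:
    # [1, x1, x2, x1^2, x2^2, x1*x2], i.e. the vector x^(p) in Eq. (4).
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))                         # sampled designs
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1]  # stand-in response

# Least-squares estimate of the coefficient vector beta.
beta, *_ = np.linalg.lstsq(prs_design(X), y, rcond=None)
y_hat = prs_design(X) @ beta            # surrogate prediction y_hat = beta^T x^(p)
print(np.round(beta, 3))                # recovers the generating coefficients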
In [3], the authors used stepwise regression to build up the PRS as follows:
• Determine an initial model.
• In a loop, alter the model with one of two actions (add or remove a predictor variable) in accordance with a special criterion.
• Stop the search when the maximum number of loop iterations is reached (a minimal sketch of such a loop follows).
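A minimal forward-selection variant of such a loop is sketched below; it only adds predictors and uses the residual sum of squares as its criterion, whereas the procedure in [3] may also remove predictors and use other criteria.

import numpy as np

def forward_stepwise(Phi, y, max_terms):
    # Start from an empty model and, in a loop, add the predictor column that
    # most reduces the residual sum of squares; stop after max_terms additions.
    selected, remaining = [], list(range(Phi.shape[1]))
    while remaining and len(selected) < max_terms:
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            rss = float(np.sum((y - Phi[:, cols] @ beta) ** 2))
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(2)
Phi = rng.normal(size=(40, 6))                       # candidate polynomial terms
y = 3 * Phi[:, 1] - 2 * Phi[:, 4] + 0.1 * rng.normal(size=40)
print(forward_stepwise(Phi, y, max_terms=2))         # typically selects [1, 4]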
Based on this approach, there are several proposals [6, 13, 4, 20, 18]; in detail:
Based on NSGA-II, the authors in [6] introduced a proposal which was applied to study trade-offs among objectives of a rocket injector design problem where
performance and life objectives compete. The Pareto Optimal Front (POF) is
approximated using a quintic polynomial. The compromise region quantifies trade-
offs among objectives.
In [13], to solve the crash safety design problem of vehicles, the authors suggest using a stepwise regression model with NSGA-II. The authors present a
multi-objective optimization procedure for the vehicle design, where the weight,
acceleration characteristics and toe-board intrusion are considered as the design
objectives. The response surface method with linear and quadratic basis functions
is employed to formulate these objectives, in which optimal Latin hypercube
sampling and stepwise regression techniques are implemented.
The authors in [4] presented and analyzed in detail an efficient search method
based on evolutionary algorithms (EA) assisted by local Gaussian random field
meta-models (GRFM). It was created for use in optimization problems with one
(or many) computationally expensive evaluation function(s). The role of GRFM is
to predict objective function values for new candidate solutions by exploiting
information recorded during previous evaluations. Moreover, GRFM are able to
provide estimates of the confidence of their predictions. Predictions and their
confidence intervals predicted by GRFM are used by the meta-model assisted EA.
It selects the promising members in each generation and carries out exact, costly
evaluations only for them. The extensive use of the uncertainty information of
predictions for screening the candidate solutions makes it possible to significantly
reduce the computational cost of single and multi-objective EA.
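The screening idea can be sketched with a generic Gaussian process regressor as follows; this is not the GRFM implementation of [4], and the single objective, the scikit-learn classes, and the lower-confidence-bound rule are simplifying assumptions.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Archive of exactly evaluated points (a cheap stand-in for the costly simulator).
rng = np.random.default_rng(3)
X_arch = rng.uniform(-2, 2, size=(15, 2))
y_arch = np.sum(X_arch ** 2, axis=1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_arch, y_arch)

# New candidates produced by the evolutionary operators.
cands = rng.uniform(-2, 2, size=(50, 2))
mean, std = gp.predict(cands, return_std=True)

# Optimistic pre-screening: keep the candidates whose lower confidence bound
# is best; only these would be sent to the exact, costly evaluation.
lcb = mean - 1.96 * std
promising = cands[np.argsort(lcb)[:5]]
print(promising)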
In [18], the authors used a special memetic operator, which performs local
search on some of the newly generated individuals. This operator used the meta-
model constructed based on previously evaluated points in the decision space. The
meta-model is trained to predict the distance to the currently known Pareto front.
In the proposal, the known points do not have the same weight, as those that are
closer to the locally optimized one are considered more important. The author’s
idea is that points closer to the Pareto front are more interesting during the run of
the algorithm, and the memetic operator moves the individuals towards the front.
The meta-model provides a general direction in which the search should proceed.
To obtain a training set for the meta-models, the authors also maintain an external archive of individuals with known objective values.
A meta-model based approach to reducing the needed number of function evaluations is introduced in [20]. Local aggregate meta-models
are used in a memetic operator. The algorithm is first discussed from a theoretical
point of view and then it is shown that the meta-models greatly reduce the number
of function evaluations.
1.2.3. The support vector machine (SVM)
In [24], based on statistical learning theory, the author introduced support vector machines (SVMs), a set of related supervised learning methods in which data is analyzed to recognize patterns. In this approach, one hyperplane or a set of hyperplanes in a corresponding multi-dimensional space is used for classification, regression and analysis. The set of inputs is mapped into a larger space, and the dot product is then computed in terms of the variables in the original space, which keeps the computational load reasonable. The "kernel function" $K(x, y)$ is defined as this dot product in the larger space; the kernel function can be chosen to suit the regression problem. Another function that is introduced is the "loss function", which provides a measure of distance. The problem of approximating the set of data is formulated as

$f(x) = \langle w, g(x) \rangle + b$     (5)
Then, the minimizer of the function

$\frac{1}{2}\langle w, w \rangle + C \sum_{i=1}^{n} (\xi_i + \xi_i^{*})$     (6)

is known as the optimal regression function. Here, $C$ is a pre-specified value, and $\xi_i, \xi_i^{*}$
are slack variables which represent the lower and upper constraints on the outputs.
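For illustration, a surrogate in the spirit of Eqs. (5) and (6) can be fitted with an off-the-shelf support vector regressor; the scikit-learn SVR class, its parameter values, and the toy objective below are choices made for this sketch.

import numpy as np
from sklearn.svm import SVR

# Training data from the expensive objective (illustrative stand-in).
rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(40, 2))
y = np.sum(X ** 2, axis=1)

# Epsilon-insensitive SVR: C weights the slack variables of Eq. (6),
# and the Gaussian (RBF) kernel plays the role of K(x, y).
model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(X, y)

print(model.predict(np.array([[0.5, -0.5]])))   # cheap prediction, close to 0.5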
Based on SVMs, there are several proposals [14, 19, 20]; in detail:
In [14], the authors proposed a meta-model based approach to reducing the needed number of function evaluations. Local aggregate meta-models are used in a memetic operator. The algorithm is first discussed from a
theoretical point of view and then it is shown that the meta-models greatly reduce
the number of function evaluations.
In [19], the authors proposed a surrogate-based evolutionary strategy for multi-
objective optimization. The evolutionary strategy uses distance based aggregate
surrogate models in two ways: as a part of the memetic search and as a way to pre-select
individuals in order to avoid evaluation of bad individuals. The model predicts the
distance of individuals to the currently known Pareto set.
In [20], the authors presented a memetic evolutionary algorithm for multi-
objective optimization with local meta-models. The proposal shows that the local
models give better results than a single global model, usually reducing the number
of needed function evaluations by 10%, with occasional reductions as high as 48%.
Although this difference may seem rather small, it can still noticeably reduce the associated costs in practical tasks.
1.2.4. The kriging (KRG)
In [15], the author proposed a method named "Kriging", which is a response surface method based on spatial prediction techniques. This method minimizes the mean squared error to model the spatial and temporal correlation among the values of an attribute. In [22], the authors developed a parametric regression model for the design and analysis of computer experiments, called DACE. The model is an extension of the Kriging approach to problems of three or more dimensions. The model is a combination of a known function $f(x)$ and a Gaussian random process $f'(x)$ that is assumed to have zero mean and covariance
$\mathrm{Cov}\left(f'(x^{(i)}), f'(x^{(j)})\right) = \sigma^2 R\left(\theta, x^{(i)}, x^{(j)}\right)$

where $R$ is a correlation function with parameters $\theta$. [...]

2. ADVANTAGES AND DISADVANTAGES OF THE MODELS
2.1. Advantages
- RBF: RBF networks have advantages of easy design, good generalization,
strong tolerance to input noise, and online learning ability. The properties of RBF
networks make them very suitable for design problems in the real world. An RBF network trains faster than a multi-layer perceptron (MLP), and its hidden layer is easier to interpret than the hidden layer of an MLP.
- PRS: The integration of multiple disciplinary codes with an optimization
code becomes more manageable. The statistical methods can be used to detect
and repair bad data and to estimate the average error in the data [9]. The model is
suited to simple design landscapes with low dimensionality where data is cheap
to obtain [23].
- SVM: With the concept of kernel, SVMs gain flexibility in the choice of the
form of the threshold separating solvent from insolvent companies, which need not be linear and even need not have the same functional form for all data, since
its function is non-parametric and operates locally. In the model, by choosing an
appropriate generalization grade, SVMs can be robust, even when the training
sample has some bias. By choosing different r values for different input values, it
is possible to rescale outliers. SVMs deliver a unique solution, since the optimality
problem is convex. With appropriate kernel, such as the Gaussian kernel, one can
put more stress on the similarity between companies, because the more similar the
financial structure of two companies is, the higher is the value of the kernel.
- Kriging: Kriging treats clusters of points more like single points, which helps to compensate for the effects of data clustering. The model gives an estimate of the estimation error (the kriging variance) along with the estimate of the variable Z itself, and this error estimate provides a basis for stochastic simulation of possible realizations of Z(u) [12].
2.2. Disadvantages
The models for surrogate-assisted MOEAs show a lot of advantages in the optimization area, but they also have some disadvantages:
- RBF: Although the RBF is quick to train, when training is finished and it is
being used it is slower than an MLP, so where speed is a factor an MLP may be more appropriate.
- PRS: Care is needed with the selection of the polynomial order. For problems with a high number of dimensions, it is difficult to obtain sufficient data.
- SVM: The biggest limitations of the SVM model lie in the choice of the kernel, in speed and size both in training and testing, and in the high algorithmic complexity and extensive memory requirements of the required quadratic programming in large-scale tasks.
- Kriging: This model is expensive to train for higher dimensional problems.
3. CURRENT RESEARCH TRENDS
Through a short review and analysis of the common models for surrogate-assisted multi-objective evolutionary algorithms, recent proposals, research issues and discussions, some promising trends for academic researchers in this area can be identified:
1. Surrogates in interactive evolutionary multi-objective (EMO) algorithms: in an interactive EMO algorithm, humans assign the fitness value to all individuals. This is necessary when no fitness functions are available. It becomes difficult for the algorithm when humans grow tired, have no clear preferences, or take no action. To solve this problem, surrogate models can be used to replace the evaluations made by the human: a machine learning model is used to predict the fitness value the human would assign to a design, based on historical data.
2. Surrogate-assisted evolution for solving dynamic multi-objective optimization problems: Recently, researchers have discussed evolutionary algorithms for dynamic MOPs extensively. The research idea is to design an evolutionary search strategy that drives solutions toward the POF. Here, maintaining the diversity of the population is an important part of the evolutionary algorithm, and memory mechanisms that include sub-populations, archives and external populations need to be constructed as research directions. Besides strategies to maintain the diversity of the population, anticipation and prediction of the change in the fitness function can be helpful in solving dynamic problems more efficiently. In such strategies, a surrogate can be helpful in learning the changing fitness function.
3. Surrogates for robust multi-objective optimization: In real-world applications, we need to consider not only the performance of MOEAs in obtaining optimal solutions but also the sensitivity of that performance to small changes in the design variables or in the environment. If an optimal solution is insensitive to such changes, it is known as a robust solution. In this case, implicit averaging or explicit averaging can be used in MOEAs to obtain robust optimal solutions, wherein an assumption on the probability distribution of the noise is often made (a minimal sketch of explicit averaging is given after this item). By contrast, one can predefine the allowed performance decrease and then search for an optimum that has the maximum tolerance to changes in the design variables or in the environment, which is termed inverse robust optimization. In both cases, explicit averaging based robust optimization and inverse robust optimization need additional fitness evaluations. A surrogate is therefore a promising direction to improve efficiency and reduce these extra fitness evaluations.
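A minimal sketch of the explicit-averaging idea is given below; the Gaussian perturbation model and the sample count are assumptions, and each such call multiplies the number of true fitness evaluations, which is exactly the cost a surrogate can help absorb.

import numpy as np

def explicit_average(f, x, sigma=0.05, n_samples=20, rng=None):
    # Effective (robust) fitness: average f over random perturbations of the
    # design variables, assuming zero-mean Gaussian noise of width sigma.
    rng = rng or np.random.default_rng(0)
    xs = x + rng.normal(scale=sigma, size=(n_samples, len(x)))
    return float(np.mean([f(xi) for xi in xs]))

f = lambda x: float(np.sum(x ** 2))               # stand-in objective
print(explicit_average(f, np.array([1.0, -1.0]))) # robust fitness near 2.0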
4. Surrogate-assisted combinatorial multi-objective optimization: In real-world applications, there are also many computationally intensive combinatorial optimization problems. In these cases, we can use discrete modeling techniques such as binary neural networks, which have been used to assist a mixed-integer evolution strategy for medical image analysis. The Kriging model is also suitable for telecom network optimization problems.
5. Surrogates for constrained optimization: In the real world, MOPs often have one or more constraints. To determine whether a given solution is feasible, we need to evaluate the constraint functions. When the evaluations of the constraint functions are time-consuming, it is desirable to replace the constraint functions with approximate models. Surrogates can then be applied to manipulate the shape and size of the feasible region to ease the solution of highly constrained MOPs. Surrogate models can also be used to deliberately enlarge the feasible region by building a very simple surrogate for each constraint function; as the optimization proceeds, the complexity of the surrogates increases gradually so that the approximated feasible region converges to the real feasible region.
4. CONCLUSION
The usage of surrogate models for multi-objective evolutionary algorithms is motivated by real-world applications. As multi-objective optimization is increasingly applied to solving complex and expensive problems, research on using surrogate models for MOEAs has grown considerably in recent years. This paper provides a brief overview of recent advances in this research area and raises some research trends that remain to be addressed in the future. Based on these research trends, researchers need to apply new techniques in combination with surrogate models for MOEAs, such as Artificial Intelligence (AI), Deep Learning and Cloud Computing, with which more computing resources will be made available to common users via computer networks.
Acknowledgments: The work is supported by the MOD project with code 2018.76.040.
REFERENCES
[1]. Florian Bittner and Ingo Hahn. Kriging-assisted multi-objective particle
swarm optimization of permanent magnet synchronous machine for hybrid
and electric cars. In Electric Machines & Drives Conference (IEMDC), 2013
IEEE International, pages 15-22. IEEE, 2013.
[2]. Seongim Choi, Juan J Alonso, and Hyoung S Chung. Design of a low-boom
supersonic business jet using evolutionary algorithms and an adaptive
unstructured mesh method. AIAA paper, 1758:2004, 2004.
[3]. Norman R Draper and Harry Smith. Applied regression analysis, volume 326.
John Wiley & Sons, 2014.
[4]. Michael TM Emmerich, Kyriakos C Giannakoglou, and Boris Naujoks.
Single-and multiobjective evolutionary optimization assisted by gaussian
random field metamodels. IEEE Transactions on Evolutionary Computation,
10(4):421-439, 2006.
[5]. Chariklia A Georgopoulou and Kyriakos C Giannakoglou. Multiobjective
metamodel-assisted memetic algorithms. In Multi-Objective Memetic
Algorithms, pages 153-181. Springer, 2009.
[6]. Tushar Goel, Rajkumar Vaidyanathan, Raphael T Haftka, Wei Shyy, Nestor
V Queipo, and Kevin Tucker. Response surface approximation of pareto
optimal front in multi-objective optimization. Computer methods in applied
mechanics and engineering, 196(4-6):879-893, 2007.
[7]. Luis Gonzalez, Jacques Periaux, Karkenahalli Srinivas, and Eric Whitney. A
generic framework for the design optimisation of multidisciplinary uav
intelligent systems using evolutionary computing. In 44th AIAA Aerospace
Sciences Meeting and Exhibit, page 1475, 2006.
[8]. Rolland L Hardy. Multiquadric equations of topography and other irregular
surfaces. Journal of geophysical research, 76(8):1905-1915, 1971.
[9]. Serhat Hosder, Layne T Watson, Bernard Grossman, William H Mason,
Hongman Kim, Raphael T Haftka, and Steven E Cox. Polynomial response
surface approximations for the multidisciplinary design optimization of a high
speed civil transport. Optimization and Engineering, 2(4):431-452, 2001.
[10]. Afzal Husain and Kwang-Yong Kim. Enhanced multi-objective optimization of
a microchannel heat sink through evolutionary algorithm coupled with multiple
surrogate models. Applied Thermal Engineering, 30(13):1683-1691, 2010.
[11]. Amitay Isaacs, Tapabrata Ray, and Warren Smith. An evolutionary algorithm
with spatially distributed surrogates for multiobjective optimization. In
Australian Conference on Artificial Life, pages 257-268. Springer, 2007.
[12]. Jack PC Kleijnen. Kriging metamodeling in simulation: A review. European
journal of operational research, 192(3):707-716, 2009.
[13]. Xingtao Liao, Qing Li, Xujing Yang, Weigang Zhang, and Wei Li.
Multiobjective optimization for crash safety design of vehicles using stepwise
regression model. Structural and multidisciplinary optimization, 35(6):561-
569, 2008.
[14]. Saul Zapotecas Martinez and Carlos A Coello Coello. A memetic algorithm
with non gradient- based local search assisted by a meta-model. In
International Conference on Parallel Problem Solving from Nature, pages
576-585. Springer, 2010.
[15]. Georges Matheron. Principles of geostatistics. Economic geology, 58(8):1246-
1266, 1963.
[16]. K. Miettinen. Nonlinear Multiobjective Optimization. Kluwer Academic
Publishers, Boston, USA, 1999.
[17]. Alfredo Arias Montano, Carlos A Coello Coello, and Efren Mezura-Montes.
MODE-LD+SS: A novel differential evolution algorithm incorporating local
dominance and scalar selection mechanisms for multi-objective optimization.
In Evolutionary Computation (CEC), 2010 IEEE Congress on, pages 1-8.
IEEE, 2010.
[18]. Martin Pilát and Roman Neruda. Lamm-mma: multiobjective memetic
algorithm with local aggregate meta-model. In Proceedings of the 13th annual
conference companion on Genetic and evolutionary computation, pages 79-
80. ACM, 2011.
[19]. Martin Pilát and Roman Neruda. An evolutionary strategy for surrogate-based
multiobjective optimization. In Evolutionary Computation (CEC), 2012 IEEE
Congress on, pages 1-7. IEEE, 2012.
[20]. Martin Pilát and Roman Neruda. Aggregate meta-models for evolutionary
multiobjective and many-objective optimization. Neurocomputing, 116:392-
402, 2013.
[21]. R. H. Myers, D. C. Montgomery, and C. M. Anderson-Cook. Response surface
methodology: process and product optimization using designed experiments.
Wiley New York, 2009.
[22]. Jerome Sacks, William J Welch, Toby J Mitchell, and Henry P Wynn. Design
and analysis of computer experiments. Statistical science, pages 409-423,
1989.
[23]. Paul G Tucker. Advanced computational fluid and aerodynamics, volume 54.
Cambridge University Press, 2016.
[24]. Vladimir Vapnik. Statistical learning theory, volume 3. Wiley, New York, 1998.
[25]. Ivan Voutchkov and AJ Keane. Multiobjective optimization using surrogates.
2006.
[26]. Saul Zapotecas Martinez and Carlos A Coello Coello. MOEA/D assisted by RBF
networks for expensive multi-objective optimization problems. In
Proceedings of the 15th annual conference on Genetic and evolutionary
computation, pages 1405-1412. ACM, 2013.
[27]. Qingfu Zhang, Wudong Liu, Edward Tsang, and Botond Virginas. Expensive
multiobjective optimization by MOEA/D with Gaussian process model. IEEE
Transactions on Evolutionary Computation, 14(3):456-474, 2010.
ABSTRACT
SURROGATE MODELS FOR MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS:
A SURVEY AND RESEARCH ISSUES
Multi-objective optimization problems are a class of real-world optimization problems with multiple conflicting objectives. Multi-objective evolutionary algorithms are usually quite effective for solving difficult multi-objective optimization problems. With a multi-objective evolutionary algorithm, based on the population principle, we obtain a set of optimal solutions (a set of feasible solutions) after the search. We usually use the dominance relation within the population, and it is not difficult to identify the Pareto solutions in each generation. However, for difficult optimization problems in practice, many fitness function evaluations are needed during the search. To avoid costly experiments, we can use computer simulation methods to solve these difficult problems. In practice, this approach requires a large computational cost and a long time for the simulation. Therefore, many studies have discussed the use of surrogate models for evolutionary algorithms, especially multi-objective evolutionary algorithms, to reduce the number of fitness evaluations. This paper gives a short survey of surrogate models for multi-objective evolutionary algorithms, current results, and open research issues.
Keywords: Surrogate models; Approximation models; Meta-models; Multi-objective evolutionary algorithms.
Received 11 January 2019
Revised 16 March 2019
Accepted for publication 25 March 2019
Address: 1MITI, Military Academy of Science and Technology;
2National Defense Academy;
3VNPT Information Technology;
4AI Academy.
*Email: [email protected].