Title: Differential Evolution Optimization in Pure R
Description: Differential Evolution (DE) stochastic heuristic algorithms for global optimization of problems with and without general constraints. The aim is to curate a collection of its variants that (1) do not sacrifice simplicity of design, (2) are essentially tuning-free, and (3) can be efficiently implemented directly in the R language. Currently, it provides implementations of the algorithms 'jDE' by Brest et al. (2006) <doi:10.1109/TEVC.2006.872133> for single-objective optimization and 'NCDE' by Qu et al. (2012) <doi:10.1109/TEVC.2011.2161873> for multimodal optimization (single-objective problems with multiple solutions).
Authors: Eduardo L. T. Conceicao [aut, cre], Martin Maechler [ctb]
Maintainer: Eduardo L. T. Conceicao <[email protected]>
License: GPL (>= 2)
Version: 1.1-3
Built: 2024-10-28 07:01:43 UTC
Source: CRAN
A bespoke implementation of the ‘jDE’ variant by Brest et al. (2006) doi:10.1109/TEVC.2006.872133.
JDEoptim(lower, upper, fn,
         constr = NULL, meq = 0, eps = 1e-05,
         NP = 10*length(lower), Fl = 0.1, Fu = 1,
         tau_F = 0.1, tau_CR = 0.1, tau_pF = 0.1,
         jitter_factor = 0.001,
         tol = 1e-15, maxiter = 200*length(lower), fnscale = 1,
         compare_to = c("median", "max"),
         add_to_init_pop = NULL,
         trace = FALSE, triter = 1,
         details = FALSE, ...)
lower, upper | numeric vectors of lower and upper bounds for the parameters to be optimized over. Must be finite. |
fn | the (nonlinear) objective function to be minimized. It takes the vector of parameters as its first argument and must return a scalar value. |
constr | an optional function specifying the left-hand side of the general constraints. Equality constraints must come first, defined to equal zero, followed by inequality constraints defined to be less than or equal to zero. It takes the vector of parameters as its first argument and returns a real vector with one element per constraint. Defaults to NULL, meaning bound-constrained minimization. |
meq | an optional positive integer specifying that the first meq constraints returned by constr are equality constraints; all remaining ones are treated as inequalities. Defaults to 0 (inequality constraints only). |
eps | maximal admissible constraint violation for equality constraints. An optional real vector of small positive tolerance values with length meq (a scalar value is applied to all equality constraints) used in the transformation of equalities into inequalities. Defaults to 1e-5. |
NP | an optional positive integer giving the number of candidate solutions in the randomly distributed initial population. Defaults to 10*length(lower). |
Fl | an optional scalar which represents the minimum value that the scaling factor F could take. Defaults to 0.1. |
Fu | an optional scalar which represents the maximum value that the scaling factor F could take. Defaults to 1. |
tau_F | an optional scalar which represents the probability that the scaling factor F is updated. Defaults to 0.1. |
tau_CR | an optional constant value which represents the probability that the crossover probability CR is updated. Defaults to 0.1. |
tau_pF | an optional scalar which represents the probability that the mutation probability pF of the DE/rand/1/either-or strategy is updated. Defaults to 0.1. |
jitter_factor | an optional tuning constant for jitter. If NULL, only dither is used. Defaults to 0.001. |
tol | an optional positive scalar giving the tolerance for the stopping criterion. Defaults to 1e-15. |
maxiter | an optional positive integer specifying the maximum number of iterations that may be performed before the algorithm is halted. Defaults to 200*length(lower). |
fnscale | an optional positive scalar specifying the typical magnitude of fn; used only in the stopping criterion (see Details). Defaults to 1. |
compare_to | an optional character string controlling which function is applied to the fn values of the current population when they are compared with the so-far best value in the stopping criterion: either "median" (the default) or "max". |
add_to_init_pop | an optional real vector of length length(lower), or column-wise matrix with length(lower) rows, giving initial candidate solutions that are appended to the randomly generated initial population. Defaults to NULL. |
trace | an optional logical value indicating if a trace of the iteration progress should be printed. Defaults to FALSE. |
triter | an optional positive integer giving the frequency of tracing (every triter iterations) when trace = TRUE. Defaults to 1. |
details | an optional logical value. If TRUE, the returned list also contains the parameters of the final population and their respective fn values. Defaults to FALSE. |
... | optional additional arguments passed to fn() and constr(). |
The setting of the control parameters of canonical Differential Evolution (DE) is crucial for the algorithm's performance. Unfortunately, when the generally recommended values for these parameters (see, e.g., Storn and Price, 1997) are unsuitable for use, their determination is often difficult and time consuming. The jDE algorithm proposed in Brest et al. (2006) employs a simple self-adaptive scheme to perform the automatic setting of the control parameters scale factor F and crossover rate CR.
This implementation differs from the original description, most notably in the use of the DE/rand/1/either-or mutation strategy (Price et al., 2005), the combination of jitter with dither (Storn, 2008), and the random initialization of F and CR. The mutation operator brings an additional control parameter, the mutation probability pF, which is self-adapted in the same manner as CR.
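For illustration, a minimal R sketch of the jDE-style self-adaptation rule for a single individual, following Brest et al. (2006) with F drawn between Fl and Fu; update_F_CR is a hypothetical helper, not part of the package, and the package's internal parameterization may differ in detail:

# jDE-style self-adaptation of F and CR for one individual (illustrative sketch)
update_F_CR <- function(F_old, CR_old,
                        Fl = 0.1, Fu = 1, tau_F = 0.1, tau_CR = 0.1) {
    F_new  <- if (runif(1) < tau_F)  runif(1, Fl, Fu) else F_old   # occasionally resample F
    CR_new <- if (runif(1) < tau_CR) runif(1)         else CR_old  # occasionally resample CR
    c(F = F_new, CR = CR_new)
}
update_F_CR(F_old = 0.5, CR_old = 0.9)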
As done by jDE and its variants (Brest et al., 2021), each worse parent in the current population is immediately replaced (asynchronous update) by its newly generated better or equal offspring (Babu and Angira, 2006), instead of updating the current population with all the new solutions at the same time as in classical DE (synchronous update).
As the algorithm subsamples via sample(), which from R version 3.6.0 depends on RNGkind(*, sample.kind), exact reproducibility of results from R versions 3.5.3 and earlier requires setting RNGversion("3.5.0"). In any case, do use set.seed() additionally for reproducibility!
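For example (a minimal sketch; RNGversion() is only needed when matching results produced under R 3.5.3 or earlier, and the objective shown is purely illustrative):

# Reproducing results generated under R <= 3.5.3
RNGversion("3.5.0")   # restore the pre-3.6.0 sample() behaviour
set.seed(1234)        # set a seed in any case for reproducibility
JDEoptim(rep(-5, 2), rep(5, 2), function(x) sum(x^2), tol = 1e-7)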
Constraint handling is done using the approach described in Zhang and Rangaiah (2012), but with a different reduction updating scheme for the constraint relaxation value. Instead of doing it once for every generation or iteration, the reduction is triggered in two cases when the constraints only contain inequalities. Firstly, every time a feasible solution is selected for replacement in the next generation by a new feasible trial candidate solution with a better objective function value. Secondly, whenever a current infeasible solution gets replaced by a feasible one. If the constraints include equalities, then the reduction is not triggered in this last case. This constitutes an original feature of the implementation.
The performance of any constraint handling technique for metaheuristics
is severely impaired by a small feasible region. Therefore, equality
constraints are particularly difficult to handle due to the tiny
feasible region they define. So, instead of explicitly including all
equality constraints in the formulation of the optimization problem,
it might prove advantageous to eliminate some of them.
This is done by expressing one variable x_j in terms of the remaining ones through an equality constraint h_k(x) = 0, where x = (x_1, ..., x_j, ..., x_d) is the vector of solutions, thereby obtaining a relationship

    x_j = R_{k,j}(x_1, ..., x_{j-1}, x_{j+1}, ..., x_d).

In this way both the variable x_j and the equality constraint h_k can be removed altogether from the original optimization formulation, since the value of x_j can be calculated during the search process by the relationship R_{k,j}. Notice, however, that two additional inequalities

    l_j <= R_{k,j}(x_1, ..., x_{j-1}, x_{j+1}, ..., x_d) <= u_j,

where l_j and u_j are the lower and upper bounds of x_j, respectively, must be provided in order to obtain an equivalent formulation of the problem. For guidance and examples on applying this approach see Wu et al. (2015).
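As a toy illustration of this reformulation (a hypothetical problem, not taken from the package examples): minimize x1^2 + x2^2 subject to x1 + x2 = 1 with -5 <= x1, x2 <= 5. Substituting x2 = 1 - x1 removes both the variable x2 and the equality constraint, while the former bounds of x2 become two inequality constraints:

# Original formulation: 2 variables + 1 equality constraint
# JDEoptim(c(-5, -5), c(5, 5),
#          fn = function(x) sum(x^2),
#          constr = function(x) x[1] + x[2] - 1, meq = 1)

# Reduced formulation: 1 variable, equality eliminated via x2 = 1 - x1;
# the former bounds of x2 (-5, 5) become inequality constraints on 1 - x1
JDEoptim(-5, 5,
         fn = function(x) x^2 + (1 - x)^2,
         constr = function(x) c((1 - x) - 5,     # x2 <= 5
                                -5 - (1 - x)),   # x2 >= -5
         tol = 1e-7)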
Bound constraints are enforced by the midpoint base approach (see, e.g., Biedrzycki et al., 2019).
Any DE variant is easily extended to deal with mixed integer nonlinear programming problems using a small variation of the technique presented by Lampinen and Zelinka (1999). Integer values are obtained by means of the floor() function only in the evaluation of the objective function and constraints, whereas DE itself still operates on continuous variables. Additionally, 1 should be added to each upper bound of the integer variables. Notice that the final solution needs to be converted with floor() to obtain its integer elements; see the pressure vessel Case B example below and the sketch that follows.
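A minimal sketch of this treatment on a hypothetical one-dimensional problem with an integer variable n in 0, ..., 10 (purely illustrative, not from the package documentation):

# Minimize (n - 3.6)^2 over integers n in 0..10
res <- JDEoptim(0, 10 + 1,                          # add 1 to the integer upper bound
                fn = function(x) (floor(x) - 3.6)^2,  # floor() only inside fn
                tol = 1e-7)
floor(res$par)   # convert the continuous solution to its integer value (here 4)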
The algorithm is stopped if

    ( compare_to{ fn(x_1), ..., fn(x_npop) } - fn(x_best) ) / fnscale <= tol,

where the "best" individual x_best is the feasible solution with the lowest objective function value in the population and the total number of elements in the population, npop, is NP + NCOL(add_to_init_pop).
For compare_to = "max"
this is the Diff criterion
studied by Zielinski and Laur (2008) among several other alternatives,
which was found to yield the best results.
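As an illustration only, a small helper replicating the test above on a vector of objective function values of the current population (is_converged is a hypothetical name, not a package function, and assumes the criterion has the form given above):

# Evaluate the stopping test on the fn values of the population
is_converged <- function(fn_values, fnscale = 1, tol = 1e-15,
                         compare_to = c("median", "max")) {
    compare <- match.arg(compare_to)
    spread  <- switch(compare,
                      median = median(fn_values),
                      max    = max(fn_values)) - min(fn_values)
    spread / fnscale <= tol
}
is_converged(c(1.0000001, 1.0000002, 1.0000003), tol = 1e-6)  # TRUE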
A list with the following components:

par | the best set of parameters found. |
value | the value of fn corresponding to par. |
iter | number of iterations taken by the algorithm. |
convergence | an integer code: 0 indicates successful completion, 1 indicates that the maximum number of iterations maxiter has been reached without convergence. |

and if details = TRUE:

poppar | matrix with length(lower) rows whose columns are the parameter vectors remaining in the final population. |
popcost | the values of fn() associated with each column of poppar. |
It is possible to perform a warm start, i.e., to resume optimization from a previous run, by using NP = 0 and passing the poppar component of that run (obtained with details = TRUE) to the add_to_init_pop argument.
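A minimal sketch of such a warm start (the objective sphere, the bounds, and the iteration limits are illustrative):

sphere <- function(x) sum(x^2)   # simple test objective
# First run: keep the final population via details = TRUE
run1 <- JDEoptim(rep(-5, 5), rep(5, 5), sphere,
                 maxiter = 100, details = TRUE)
# Warm start: no new random individuals (NP = 0),
# the previous population seeds the new run
run2 <- JDEoptim(rep(-5, 5), rep(5, 5), sphere,
                 NP = 0, add_to_init_pop = run1$poppar,
                 tol = 1e-7, details = TRUE)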
Eduardo L. T. Conceicao [email protected]
Babu, B. V. and Angira, R. (2006) Modified differential evolution (MDE) for optimization of non-linear chemical processes. Computers and Chemical Engineering 30, 989–1002. doi:10.1016/j.compchemeng.2005.12.020.
Biedrzycki, R., Arabas, J. and Jagodzinski, D. (2019) Bound constraints handling in differential evolution: An experimental study. Swarm and Evolutionary Computation 50, 100453. doi:10.1016/j.swevo.2018.10.004.
Brest, J., Greiner, S., Boskovic, B., Mernik, M. and Zumer, V. (2006) Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation 10, 646–657. doi:10.1109/TEVC.2006.872133.
Brest, J., Maucec, M. S. and Boskovic, B. (2021) Self-adaptive differential evolution algorithm with population size reduction for single objective bound-constrained optimization: Algorithm j21; in 2021 IEEE Congress on Evolutionary Computation (CEC). IEEE, pp. 817–824. doi:10.1109/CEC45853.2021.9504782.
Lampinen, J. and Zelinka, I. (1999). Mechanical engineering design optimization by differential evolution; in Corne, D., Dorigo, M. and Glover, F., Eds., New Ideas in Optimization. McGraw-Hill, pp. 127–146.
Price, K. V., Storn, R. M. and Lampinen, J. A. (2005) Differential evolution: A practical approach to global optimization. Springer, Berlin, Heidelberg, pp. 117–118. doi:10.1007/3-540-31306-0_2.
Storn, R. (2008) Differential evolution research — Trends and open questions; in Chakraborty, U. K., Ed., Advances in differential evolution. SCI 143, Springer, Berlin, Heidelberg, pp. 11–12. doi:10.1007/978-3-540-68830-3_1.
Storn, R. and Price, K. (1997) Differential evolution - A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization 11, 341–359. doi:10.1023/A:1008202821328.
Wu, G., Pedrycz, W., Suganthan, P. N. and Mallipeddi, R. (2015) A variable reduction strategy for evolutionary algorithms handling equality constraints. Applied Soft Computing 37, 774–786. doi:10.1016/j.asoc.2015.09.007.
Zhang, H. and Rangaiah, G. P. (2012) An efficient constraint handling method with integrated differential evolution for numerical and engineering optimization. Computers and Chemical Engineering 37, 74–88. doi:10.1016/j.compchemeng.2011.09.018.
Zielinski, K. and Laur, R. (2008) Stopping criteria for differential evolution in constrained single-objective optimization; in Chakraborty, U. K., Ed., Advances in differential evolution. SCI 143, Springer, Berlin, Heidelberg, pp. 111–138. doi:10.1007/978-3-540-68830-3_4.
Function DEoptim() in the DEoptim package has many more options than JDEoptim(), but does not allow constraints in the same flexible manner.
# NOTE: Examples were excluded from testing
#       to reduce package check time.

# Use a preset seed so test values are reproducible.
set.seed(1234)

# Bound-constrained optimization

#   Griewank function
#
#   -600 <= xi <= 600, i = {1, 2, ..., n}
#   The function has a global minimum located at
#   x* = (0, 0, ..., 0) with f(x*) = 0. Number of local minima
#   for arbitrary n is unknown, but in the two dimensional case
#   there are some 500 local minima.
#
#   Source:
#     Ali, M. Montaz, Khompatraporn, Charoenchai, and
#     Zabinsky, Zelda B. (2005).
#     A numerical evaluation of several stochastic algorithms
#     on selected continuous global optimization test problems.
#     Journal of Global Optimization 31, 635-672.
#     https://doi.org/10.1007/s10898-004-9972-2

griewank <- function(x) {
    1 + crossprod(x)/4000 - prod( cos(x/sqrt(seq_along(x))) )
}

JDEoptim(rep(-600, 10), rep(600, 10), griewank,
         tol = 1e-7, trace = TRUE, triter = 50)

# Nonlinear constrained optimization

#   0 <= x1 <= 34, 0 <= x2 <= 17, 100 <= x3 <= 300
#   The global optimum is
#   (x1, x2, x3; f) = (0, 16.666667, 100; 189.311627).
#
#   Source:
#     Westerberg, Arthur W., and Shah, Jigar V. (1978).
#     Assuring a global optimum by the use of an upper bound
#     on the lower (dual) bound.
#     Computers and Chemical Engineering 2, 83-92.
#     https://doi.org/10.1016/0098-1354(78)80012-X

fcn <-
    list(obj = function(x) {
             35*x[1]^0.6 + 35*x[2]^0.6
         },
         eq = 2,
         con = function(x) {
             x1 <- x[1]; x3 <- x[3]
             c(600*x1 - 50*x3 - x1*x3 + 5000,
               600*x[2] + 50*x3 - 15000)
         })

JDEoptim(c(0, 0, 100), c(34, 17, 300),
         fn = fcn$obj, constr = fcn$con, meq = fcn$eq,
         tol = 1e-7, trace = TRUE, triter = 50)

# Designing a pressure vessel
# Case A: all variables are treated as continuous
#
#   1.1 <= x1 <= 12.5*, 0.6 <= x2 <= 12.5*,
#   0.0 <= x3 <= 240.0*, 0.0 <= x4 <= 240.0
#   Roughly guessed*
#   The global optimum is (x1, x2, x3, x4; f) =
#   (1.100000, 0.600000, 56.99482, 51.00125; 7019.031).
#
#   Source:
#     Lampinen, Jouni, and Zelinka, Ivan (1999).
#     Mechanical engineering design optimization
#     by differential evolution.
#     In: David Corne, Marco Dorigo and Fred Glover (Editors),
#     New Ideas in Optimization, McGraw-Hill, pp 127-146

pressure_vessel_A <-
    list(obj = function(x) {
             x1 <- x[1]; x2 <- x[2]; x3 <- x[3]; x4 <- x[4]
             0.6224*x1*x3*x4 + 1.7781*x2*x3^2 +
             3.1611*x1^2*x4 + 19.84*x1^2*x3
         },
         con = function(x) {
             x1 <- x[1]; x2 <- x[2]; x3 <- x[3]; x4 <- x[4]
             c(0.0193*x3 - x1,
               0.00954*x3 - x2,
               750.0*1728.0 - pi*x3^2*x4 - 4/3*pi*x3^3)
         })

JDEoptim(c( 1.1,  0.6,   0.0,   0.0),
         c(12.5, 12.5, 240.0, 240.0),
         fn = pressure_vessel_A$obj,
         constr = pressure_vessel_A$con,
         tol = 1e-7, trace = TRUE, triter = 50)

# Mixed integer nonlinear programming
# Designing a pressure vessel
# Case B: solved according to the original problem statements
#   steel plate available in thicknesses multiple
#   of 0.0625 inch
#
#   wall thickness of the
#   shell 1.1 [18*0.0625] <= x1 <= 12.5 [200*0.0625]
#   heads 0.6 [10*0.0625] <= x2 <= 12.5 [200*0.0625]
#         0.0 <= x3 <= 240.0, 0.0 <= x4 <= 240.0
#   The global optimum is (x1, x2, x3, x4; f) =
#   (1.125 [18*0.0625], 0.625 [10*0.0625],
#    58.29016, 43.69266; 7197.729).

pressure_vessel_B <-
    list(obj = function(x) {
             x1 <- floor(x[1])*0.0625
             x2 <- floor(x[2])*0.0625
             x3 <- x[3]; x4 <- x[4]
             0.6224*x1*x3*x4 + 1.7781*x2*x3^2 +
             3.1611*x1^2*x4 + 19.84*x1^2*x3
         },
         con = function(x) {
             x1 <- floor(x[1])*0.0625
             x2 <- floor(x[2])*0.0625
             x3 <- x[3]; x4 <- x[4]
             c(0.0193*x3 - x1,
               0.00954*x3 - x2,
               750.0*1728.0 - pi*x3^2*x4 - 4/3*pi*x3^3)
         })

res <- JDEoptim(c(   18,    10,   0.0,   0.0),
                c(200+1, 200+1, 240.0, 240.0),
                fn = pressure_vessel_B$obj,
                constr = pressure_vessel_B$con,
                tol = 1e-7, trace = TRUE, triter = 50)
res

# Now convert to integer x1 and x2
c(floor(res$par[1:2]), res$par[3:4])
A bespoke implementation of the ‘NCDE’ (neighborhood based crowding DE) algorithm by Qu et al. (2012) doi:10.1109/TEVC.2011.2161873, assisted with the dynamic archive mechanism of Epitropakis et al. (2013) doi:10.1109/CEC.2013.6557556.
NCDEoptim(lower, upper, fn,
          constr = NULL, meq = 0, eps = 1e-5,
          crit = 1e-5, niche_radius = NULL, archive_size = 100,
          reinit_if_solu_in_arch = TRUE,
          NP = 100, Fl = 0.1, Fu = 1, CRl = 0, CRu = 1.1,
          nbngbrsl = NP/20, nbngbrsu = NP/5,
          tau_F = 0.1, tau_CR = 0.1, tau_pF = 0.1, tau_nbngbrs = 0.1,
          jitter_factor = 0.001,
          maxiter = 2000,
          add_to_init_pop = NULL, trace = FALSE, triter = 1, ...)
lower, upper | numeric vectors, the lower and upper bounds of the search space (box constraints); must be finite. |
fn | a function to be minimized that takes the numeric vector of parameters as its first argument and returns the value of the objective at that point. |
constr | a vector function specifying the left-hand side of the general constraints. Equality constraints must come first, defined to equal zero, followed by inequality constraints defined to be less than or equal to zero. Defaults to NULL (no general constraints). |
meq | an integer, the first meq constraints returned by constr are equality constraints, whereas the remaining ones are inequality constraints. Defaults to 0. |
eps | the maximal admissible constraint violation for equality constraints. A numeric vector of small positive tolerance values with length meq (a scalar value is applied to all equality constraints). Default is 1e-5. |
crit | a numeric, the acceptance threshold of the dynamic archive strategy: a candidate solution is considered good enough to be tested for inclusion in the archive when its objective value differs from the best value found so far by less than crit. Default is 1e-5. |
niche_radius | a numeric, the absolute tolerance used to decide whether a solution coincides with one already stored in the archive, i.e., whether both belong to the same niche. Default is NULL. |
archive_size | an integer, the maximum number of solutions that can be kept in the archive; entries above this limit are discarded. Default is 100. |
reinit_if_solu_in_arch | a logical, if TRUE a member of the current population that coincides with an archived solution is reinitialized at random in the search space. Default is TRUE. |
NP | an integer, the population size. Defaults to 100. |
Fl | a numeric, the minimum value that the scaling factor F could take. Default is 0.1. |
Fu | a numeric, the maximum value that the scaling factor F could take. Default is 1. |
CRl | a numeric, the minimum value to be used for the crossover constant CR. Default is 0. |
CRu | a numeric, the maximum value to be used for the crossover constant CR. Default is 1.1. |
nbngbrsl | an integer, the lower limit for the neighborhood size. Default is NP/20. |
nbngbrsu | an integer, the upper limit for the neighborhood size. Default is NP/5. |
tau_F | a numeric, the probability that the scaling factor F is updated. Default is 0.1. |
tau_CR | a numeric, the probability that the crossover constant CR is updated. Default is 0.1. |
tau_pF | a numeric, the probability that the mutation probability pF is updated. Default is 0.1. |
tau_nbngbrs | a numeric, the probability that the neighborhood size is updated. Default is 0.1. |
jitter_factor | a numeric, the tuning constant for jitter. If NULL, only dither is used. Default is 0.001. |
maxiter | an integer, the maximum number of iterations allowed, which is the stopping condition. Default is 2000. |
add_to_init_pop | numeric vector of length length(lower), or column-wise matrix with length(lower) rows, of initial candidate solutions appended to the randomly generated initial population. Default is NULL. |
trace | a logical, determines whether or not to monitor the iteration process. Default is FALSE. |
triter | an integer, trace output is printed at every triter iterations when trace = TRUE. Default is 1. |
... | additional arguments passed to fn() and constr(). |
This implementation differs mainly from the original 'NCDE' algorithm of Qu et al. (2012) by employing the archiving procedure proposed in Epitropakis et al. (2013) and the adaptive 'jDE' strategy instead of canonical Differential Evolution. The key reason for archiving good solutions during the search process is to prevent them from being lost during evolution. Constraints are tackled through the epsilon-constrained method as proposed in Poole and Allen (2019). The 'jDE' and epsilon-constrained mechanisms are applied in the same way as in JDEoptim, but with a synchronous mode of population update. In contrast, the reinitialization in the current population triggered by already found solutions is done asynchronously.
Each line of trace output has the format:

    iteration : < value of niche radius >
        population>> ( value of best solution ) best solution { index of violated constraints }
        archive>>    [ number of solutions found ] ( value of best solution ) best solution
A list with the following components:
solution_arch | a matrix of the solutions stored in the archive. |
objective_arch | the values of fn() for the solutions in the archive. |
solution_pop | a matrix of the solutions in the final population. |
objective_pop | the values of fn() for the solutions in the final population. |
iter | the number of iterations used. |

and if there are general constraints present:

constr_value_arch | a matrix of the constraint values for the solutions in the archive. |
constr_value_pop | a matrix of the constraint values for the solutions in the final population. |
This function is in an experimental stage.
Eduardo L. T. Conceicao [email protected]
Epitropakis, M. G., Li, X. and Burke, E. K. (2013) A dynamic archive niching differential evolution algorithm for multimodal optimization; in 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, pp. 79–86. doi:10.1109/CEC.2013.6557556.
Poole, D. J. and Allen, C. B. (2019) Constrained niching using differential evolution. Swarm and Evolutionary Computation 44, 74–100. doi:10.1016/j.swevo.2018.11.004.
Qu, B. Y., Suganthan, P. N. and Liang, J. J. (2012) Differential evolution with neighborhood mutation for multimodal optimization. IEEE Transactions on Evolutionary Computation 16, 601–614. doi:10.1109/TEVC.2011.2161873.
# NOTE: Examples were excluded from testing
#       to reduce package check time.

# Use a preset seed so test values are reproducible.
set.seed(1234)

# Warning: the examples are run using a very small number of
# iterations to decrease execution time.

# Bound-constrained optimization

#   Vincent function
#
#   f(x) = -mean(sin(10*log(x)))
#
#   0.25 <= xi <= 10, i = {1, 2, ..., n}
#   The function has 6^n global minima without local minima.

NCDEoptim(c(0.25, 0.25), c(10, 10),
          function(x) -mean(sin(10*log(x))),
          niche_radius = 0.2,
          maxiter = 200, trace = TRUE, triter = 20)

# Nonlinear constrained optimization

#   Function F10 of Poole and Allen (2019)
#
#   f(x) = -sin(5*pi*x)^6 + 1
#   subject to:
#   g(x) = -cos(10*pi*x) <= 0
#
#   0 <= x <= 1
#   The 10 global optima are
#   (x1*, ..., x10*; f*) = ((2*(1:10) - 1)/20; 0.875).

NCDEoptim(0, 1,
          function(x) -sin(5*pi*x)^6 + 1,
          function(x) -cos(10*pi*x),
          niche_radius = 0.05,
          maxiter = 200, trace = TRUE, triter = 20)