Title: | Bayesian Optimization and Model-Based Optimization of Expensive Black-Box Functions |
Description: | Flexible and comprehensive R toolbox for model-based optimization ('MBO'), also known as Bayesian optimization. It implements the Efficient Global Optimization Algorithm and is designed for both single- and multi-objective optimization with mixed continuous, categorical and conditional parameters. The machine learning toolbox 'mlr' provides dozens of regression learners to model the performance of the target algorithm with respect to the parameter settings. It provides many different infill criteria to guide the search process. Additional features include multi-point batch proposal, parallel execution as well as visualization and sophisticated logging mechanisms, which is especially useful for teaching and understanding of algorithm behavior. 'mlrMBO' is implemented in a modular fashion, such that single components can be easily replaced or adapted by the user for specific use cases. |
Authors: | Bernd Bischl [aut], Jakob Richter [aut, cre], Jakob Bossek [aut], Daniel Horn [aut], Michel Lang [aut], Janek Thomas [aut] |
Maintainer: | Jakob Richter <[email protected]> |
License: | BSD_2_clause + file LICENSE |
Version: | 1.1.5.1 |
Built: | 2024-11-21 06:54:12 UTC |
Source: | CRAN |
There are multiple types of errors that can occur during one optimization process. mlrMBO tries to handle most of them as smartly as possible. The following problems can arise:
1. The target function returns NA(s) or NaN(s) (plural for the multi-objective case).
2. The target function stops with an error.
3. The target function does not return at all (infinite or very long execution time).
4. The target function crashes the whole R process.
5. The surrogate machine learning model might crash. Kriging in particular quite often runs into numerical problems.
6. The proposal mechanism - in multi-point or single-point mode - produces a point which is either close to another candidate point in the same iteration or to an already visited point from a previous iteration.
7. The mbo process exits / stops / crashes itself, e.g., because it hit a walltime.
Mechanism I - Objective value imputation
Issues 1-4 all have in common that the optimizer does not obtain a useful objective value. Issues 3-4 are especially problematic, because we completely lose control of the R process. We are currently only able to handle them if you parallelize your optimization via parallelMap and use the BatchJobs mode. In this case, you can specify a walltime (handles 3) and the function evaluation is performed in a separate R process (handles 4). A future extension might allow function evaluation in a separate process in general, with a capping time. If you really need this now, you can always do it yourself.
Now back to the problem of invalid objective values. By default, the mbo function stops with an error (if it still has control of the process). But in many cases you still want the algorithm to continue. Hence, mbo allows imputation of bad values via the control option impute.y.fun.
Logging: All error messages are logged into the optimization path opt.path if problems occur.
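A minimal sketch of such an imputation for a minimization problem; the penalty scheme below is an illustrative choice, not a package default:
library(mlrMBO)
library(ParamHelpers)
# impute.y.fun receives the failed point x, the invalid y, the opt.path
# and further arguments; it must return a valid objective value.
ctrl = makeMBOControl(
  impute.y.fun = function(x, y, opt.path, ...) {
    # replace the broken evaluation with the worst value seen so far,
    # plus a small penalty to keep the optimizer away from this region
    worst = max(getOptPathY(opt.path), na.rm = TRUE)
    worst + 0.05 * abs(worst)
  }
)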
Mechanism II - mlr's on.learner.error
If your surrogate learner crashes, you can set on.surrogate.error in makeMBOControl to “quiet” or “warn”. This will set mlr's on.learner.error for the surrogate. It prevents MBO from crashing completely (issue 5) if the surrogate learner produces an error. As a fallback, a FailureModel is returned instead of the surrogate. Subsequently, a random point (or multiple ones) is proposed for the current iteration, in the hope that the model can be fitted again in the next iteration.
Logging: The entry “model.error” is set in the opt.path.
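For example, a control object that merely warns instead of stopping when the surrogate fails:
ctrl = makeMBOControl(on.surrogate.error = "warn")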
Mechanism III - Filtering of proposed points which are too close
Issue 6 is solved by filtering points that are too close to other proposed points or to points already proposed in preceding iterations. Filtering in this context means replacing the proposed points by randomly generated new points. The heuristic is (de)activated via the logical filter.proposed.points parameter of the setMBOControlInfill function; the closeness of two points is determined via the filter.proposed.points.tol parameter.
Logging: The logical entry “filtered.point” is set in the opt.path, indicating whether the corresponding point was filtered.
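A sketch of activating the filter; the tolerance value is an illustrative choice:
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl, filter.proposed.points = TRUE,
  filter.proposed.points.tol = 1e-4)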
Mechanism IV - Continue the optimization process
This mechanism is a save-state-then-continue mechanism that allows you to continue your optimization after your system or the optimization process crashed for some reason (issue 7). The mbo function can save the current state to disk after certain iterations of the main loop via the control option save.on.disk.at of makeMBOControl. Note that this saving mechanism is disabled by default. Here you can specify after which iterations you want the current state to be saved (option save.on.disk.at). Notice that 0 denotes saving the initial design and iters + 1 denotes saving the final result. With mboContinue you can continue the optimization from the last saved state. This function only requires the path of the saved state. You will get a warning if you turn on saving in general, but not for the final result, as this seems a bit pointless. save.file.path defines the path of the RData file where the state is stored. It is overwritten (i.e., extended) in each saving iteration.
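A sketch of a crash-safe setup; the file path and iteration count are illustrative:
ctrl = makeMBOControl(
  save.on.disk.at = 0:11,  # initial design (0) through final result (iters + 1)
  save.file.path = file.path(getwd(), "mlrMBO_run.RData")
)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
# run mbo(...); if the process dies, resume from the saved state:
# res = mboContinue(file.path(getwd(), "mlrMBO_run.RData"))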
Usually used for 1D or 2D examples; useful for figuring out how stuff works and for teaching purposes. Currently only parameter spaces with numerical parameters are supported. For visualization, run plotExampleRun on the resulting object. What is displayed is documented here: plotExampleRun. Rendering the plots without displaying them is possible via the function renderExampleRunPlot.
Please note the following things:
- The true objective function (and later everything which is predicted from our surrogate model) is evaluated on a regularly spaced grid. These evaluations are stored in the result object. You can control the resolution of this grid via points.per.dim. Parallelization of these evaluations is possible with the R package parallelMap on the level mlrMBO.feval.
- In every iteration the fitted, approximating surrogate model is stored in the result object (via store.model.at in control) so we can later visualize it quickly.
- The global optimum of the function (if defined) is extracted from the passed smoof function.
- If the passed objective function fun does not provide the true, unnoisy objective function, some features will not be displayed (for example the gap between the best point so far and the global optimum).
exampleRun(fun, design = NULL, learner = NULL, control, points.per.dim = 50,
  noisy.evals = 10, show.info = getOption("mlrMBO.show.info", TRUE))
Arguments: fun, design, learner, control, points.per.dim, noisy.evals, show.info.
Value: [MBOExampleRun].
Only available for 2D -> 2D examples; useful for figuring out how stuff works and for teaching purposes. Currently only parameter spaces with numerical parameters are supported. For visualization, run plotExampleRun on the resulting object. What is displayed is documented here: plotExampleRun.
exampleRunMultiObj(fun, design = NULL, learner, control, points.per.dim = 50,
  show.info = getOption("mlrMBO.show.info", TRUE), nsga2.args = list(), ...)
Arguments: fun, design, learner, control, points.per.dim, show.info, nsga2.args, ... [any].
Value: [MBOExampleRunMultiObj].
If the passed objective function has no associated reference point, max(y_i) + 1 of the nsga2 front is used.
Returns the common mlrMBO result object.
finalizeSMBO(opt.state)
Arguments: opt.state.
Value: [MBOSingleObjResult | MBOMultiObjResult].
Helper function which returns the (estimated) global optimum.
getGlobalOpt(run)
Arguments: run.
Value: [numeric(1)]. (Estimated) global optimum.
Returns properties of an infill criterion, e.g., name or id.
getMBOInfillCritParams(x)
getMBOInfillCritParam(x, par.name)
getMBOInfillCritName(x)
getMBOInfillCritId(x)
hasRequiresInfillCritStandardError(x)
getMBOInfillCritComponents(x)
Arguments: x, par.name.
Value: None.
Returns all names of supported infill-criterion optimizers.
getSupportedInfillOptFunctions()
Value: [character].
Returns all names of supported multi-point infill-criteria optimizers.
getSupportedMultipointInfillOptFunctions()
Value: [character].
mlrMBO ships with the most popular infill criteria, e.g., expected improvement, (lower) confidence bound, etc. Moreover, custom infill criteria may be generated with the makeMBOInfillCrit function.
makeMBOInfillCritMeanResponse()
makeMBOInfillCritStandardError()
makeMBOInfillCritEI(se.threshold = 1e-06)
makeMBOInfillCritCB(cb.lambda = NULL)
makeMBOInfillCritAEI(aei.use.nugget = FALSE, se.threshold = 1e-06)
makeMBOInfillCritEQI(eqi.beta = 0.75, se.threshold = 1e-06)
makeMBOInfillCritDIB(cb.lambda = 1, sms.eps = NULL)
makeMBOInfillCritAdaCB(cb.lambda.start = NULL, cb.lambda.end = NULL)
Arguments: se.threshold, cb.lambda, aei.use.nugget, eqi.beta, sms.eps, cb.lambda.start, cb.lambda.end.
In the multi-objective case we recommend to set cb.lambda to -Φ⁻¹(0.5 · π^(1/n)), where Φ⁻¹ is the quantile function of the standard normal distribution (qnorm in R), π is the probability of improvement value and n is the number of objectives of the considered problem.
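As a quick sketch, constructing criteria and attaching one to the control object; the lambda shown follows the recommendation above for n = 2 objectives and π = 0.5:
crit.ei = makeMBOInfillCritEI()              # expected improvement
lambda = -qnorm(0.5 * 0.5^(1/2))             # recommended multi-objective lambda
crit.cb = makeMBOInfillCritCB(cb.lambda = lambda)
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl, crit = crit.ei)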
Some infill criteria have parameters that depend on values in the parameter set, the design, the used learner or other control settings. To actually set these default values, this function is called; it returns a fully initialized [MBOInfillCrit].
This function is mainly for internal use. If a custom infill criterion is created, it may be required to create a separate method initCrit.InfillCritID, where ID is the id of the custom MBOInfillCrit.
initCrit(crit, fun, design, learner, control)
Arguments: crit, fun, design (sampling plan), learner, control.
When you want to run a human-in-the-loop MBO process, you need to initialize it first.
initSMBO(par.set, design, learner = NULL, control,
  minimize = rep(TRUE, control$n.objectives), noisy = FALSE,
  show.info = getOption("mlrMBO.show.info", TRUE))
Arguments: par.set, design, learner, control, minimize, noisy, show.info.
Value: [OptState].
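A minimal human-in-the-loop sketch, assuming a toy 1D sine objective evaluated "by hand" outside of mlrMBO (uses proposePoints, updateSMBO and finalizeSMBO, documented elsewhere in this reference):
library(mlrMBO)
ps = makeNumericParamSet("x", len = 1L, lower = 0, upper = 10)
ctrl = makeMBOControl()
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI())
des = generateDesign(5L, ps)
des$y = sin(des$x)                  # evaluate the initial design yourself
opt.state = initSMBO(par.set = ps, design = des, control = ctrl, minimize = TRUE)
prop = proposePoints(opt.state)     # ask mlrMBO for the next point
y.new = sin(prop$prop.points$x)     # external ("human") evaluation
updateSMBO(opt.state, x = prop$prop.points, y = y.new)
res = finalizeSMBO(opt.state)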
Creates a control object for MBO optimization.
makeMBOControl(n.objectives = 1L, propose.points = 1L,
  final.method = "best.true.y", final.evals = 0L, y.name = "y",
  impute.y.fun = NULL, trafo.y.fun = NULL, suppress.eval.errors = TRUE,
  save.on.disk.at = integer(0L), save.on.disk.at.time = Inf,
  save.file.path = file.path(getwd(), "mlrMBO_run.RData"),
  store.model.at = NULL, resample.at = integer(0),
  resample.desc = makeResampleDesc("CV", iter = 10),
  resample.measures = list(mse), output.num.format = "%.3g",
  on.surrogate.error = "stop")
Arguments: n.objectives, propose.points, final.method, final.evals, y.name, impute.y.fun, trafo.y.fun, suppress.eval.errors, save.on.disk.at, save.on.disk.at.time, save.file.path, store.model.at, resample.at, resample.desc, resample.measures, output.num.format, on.surrogate.error.
Value: [MBOControl].
Other MBOControl: setMBOControlInfill(), setMBOControlMultiObj(), setMBOControlMultiPoint(), setMBOControlTermination()
The infill criterion guides the model-based search process. The most prominent infill criteria, e.g., expected improvement, lower confidence bound and others, are already implemented in mlrMBO. Moreover, the package allows for the creation of custom infill criteria.
makeMBOInfillCrit(fun, name, id, opt.direction = "minimize",
  components = character(0L), params = list(), requires.se = FALSE)
Arguments: fun (Important: internally, this function will be minimized, so proposals will be made where this function is low), name, id, opt.direction, components, params, requires.se.
Predefined standard infill criteria:
crit.ei Expected Improvement
crit.mr Mean response
crit.se Standard error
crit.cb Confidence bound with lambda automatically chosen, see infillcrits
crit.cb1 Confidence bound with lambda=1
crit.cb2 Confidence bound with lambda=2
crit.aei Augmented expected improvement
crit.eqi Expected quantile improvement
crit.dib1 Direct indicator-based with lambda=1
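A sketch of a custom criterion, assuming the documented fun signature (points, models, control, par.set, designs, iter, progress, attributes); it rewards high predictive uncertainty, i.e., pure exploration. Since criteria are minimized internally, the standard error is negated:
crit.explore = makeMBOInfillCrit(
  fun = function(points, models, control, par.set, designs, iter,
    progress, attributes = FALSE) {
    p = predict(models[[1L]], newdata = points)$data
    -p$se                    # maximize the predictive standard error
  },
  name = "Pure exploration",
  id = "explore",
  requires.se = TRUE
)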
This is a helper function that generates a default surrogate, based on properties of the objective function and the selected infill criterion.
For numeric-only (including integer) parameter spaces without any dependencies:
- A Kriging model “regr.km” with kernel “matern3_2” is created.
- If the objective function is deterministic, we add a small nugget effect (10^-8 * Var(y), where y is the vector of observed outcomes in the current design) to increase numerical stability and hopefully prevent crashes of DiceKriging.
- If the objective function is noisy, the nugget effect will be estimated with nugget.estim = TRUE (but you can override this in ...). Also, jitter is set to TRUE to circumvent a problem with DiceKriging where already trained input values produce exactly the trained output. For further information check the $note slot of the created learner.
- Instead of the default "BFGS" optimization method, we use rgenoud ("gen"), a hybrid algorithm that combines global search based on genetic algorithms with local search based on gradients. This may improve the model fit and will less frequently produce a constant surrogate model. You can also override this setting in ....
For mixed numeric-categorical parameter spaces, or spaces with conditional parameters:
- A random regression forest “regr.randomForest” with 500 trees is created.
- The standard error of a prediction (if required by the infill criterion) is estimated by computing the jackknife-after-bootstrap. This is the se.method = "jackknife" option of the “regr.randomForest” learner.
- If dependencies are additionally present in the parameter space, inactive conditional parameters are represented by missing NA values in the training design data.frame. We simply handle those with an imputation method added to the random forest: if a numeric value is inactive, i.e., missing, it is imputed by 2 times the maximum of the observed values; if a categorical value is inactive, i.e., missing, it is imputed by the special class label "__miss__". Both of these techniques make sense for tree-based methods and are usually hard to beat, see Ding et al. (2010).
makeMBOLearner(control, fun, config = list(), ...)
Arguments: control, fun, config, ... [any].
Value: [Learner].
Ding, Yufeng, and Jeffrey S. Simonoff. An investigation of missing data methods for classification trees applied to binary response data. Journal of Machine Learning Research 11 (2010): 131-170.
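A sketch of creating and tweaking the default surrogate; covtype is a regr.km hyperparameter passed through ..., chosen here purely for illustration:
ctrl = makeMBOControl()
obj.fun = smoof::makeBraninFunction()
lrn = makeMBOLearner(ctrl, obj.fun, covtype = "gauss")  # override the kernel via ...
print(lrn)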
Creates a transformation function for MBOExampleRun.
makeMBOTrafoFunction(name, fun)
Arguments: name, fun.
Object of type MBOTrafoFunction.
See mbo_parallel for all parallelization options.
mbo(fun, design = NULL, learner = NULL, control = NULL,
  show.info = getOption("mlrMBO.show.info", TRUE), more.args = list())
Arguments: fun, design, learner, control, show.info, more.args [list].
Value: [MBOSingleObjResult | MBOMultiObjResult].
# simple 2d objective function
obj.fun = makeSingleObjectiveFunction(
  fn = function(x) x[1]^2 + sin(x[2]),
  par.set = makeNumericParamSet(id = "x", lower = -1, upper = 1, len = 2)
)
# create base control object
ctrl = makeMBOControl()
# do three MBO iterations
ctrl = setMBOControlTermination(ctrl, iters = 3L)
# use 500 points in the focussearch (should be sufficient for 2d)
ctrl = setMBOControlInfill(ctrl, opt.focussearch.points = 500)
# create initial design
des = generateDesign(n = 5L, getParamSet(obj.fun), fun = lhs::maximinLHS)
# start mbo
res = mbo(obj.fun, design = des, control = ctrl)
print(res)
## Not run: 
plot(res)
## End(Not run)
In mlrMBO the OptPath contains extra information next to the information documented in OptPath. The extras are:
train.time: Time to train the model(s) that produced the points. Only the first slot of the vector is used (if we have multiple points); the rest are NA.
propose.time: Time needed to propose the point. If we have individual timings from the proposal mechanism, there is one value per point. If all points were generated in one go, we only have one timing; we store it in the slot for the first point, the rest are NA.
errors.model: Possible error messages. If the point-producing model(s) crashed, they are replicated for all n points; if only one error message was passed, we store it for the first point, the rest are NA.
prop.type: Type of point proposal. Possible values are:
- initdesign: Points actually not proposed, but in the initial design.
- infill_x: Here x is a placeholder for the selected infill criterion, e.g., infill_ei for expected improvement.
- random_interleave: Uniformly sampled points added additionally to the proposed points.
- random_filtered: If filtering of proposed points located too close to each other is active, these are replaced by random points.
- final_eval: If final.evals is set in makeMBOControl: final evaluations of the proposed solution to reduce noise in y.
parego.weight: Weight vector sampled for multi-point ParEGO.
Depending on the chosen infill criterion there will be additional columns, e.g., se and mean for the expected improvement.
Moreover, the user may pass additional “user extras” by appending a named list of scalar values to the return value of the objective function.
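A short sketch of inspecting these extras after a run (res as returned by mbo in the example above):
opdf = as.data.frame(res$opt.path)
head(opdf[, c("y", "train.time", "propose.time", "prop.type")])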
In mlrMBO you can parallelize the tuning on two different levels to speed up computation:
mlrMBO.feval: Multiple evaluations of the target function.
mlrMBO.propose.points: Optimization of the infill criteria if multiple are used (e.g., ParEGO and parallel LCB).
Internally the evaluation of the target function is realized with the R package parallelMap. See the mlrMBO tutorial and the GitHub project pages of parallelMap for instructions on how to set up parallelization. The different levels of parallelization can be specified in parallelStart*. Details for the levels mentioned above:
- Evaluation of the objective function can be parallelized in cases where multiple points are to be evaluated at once. These are: evaluation of the initial design, multiple proposed points per iteration and evaluation of the target function in exampleRun. (Level: mlrMBO.feval)
- Model fitting / point proposal - in some cases where independent, expensive operations are performed. (Level: mlrMBO.propose.points)
Details regarding the latter:
- Parallel optimization of LCBs for the lambda-values.
- Parallel optimization of scalarization functions.
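A sketch of enabling parallel function evaluations on four local cores via parallelMap (obj.fun, des and ctrl as in the mbo example above):
library(parallelMap)
parallelStartMulticore(cpus = 4L, level = "mlrMBO.feval")
res = mbo(obj.fun, design = des, control = ctrl)
parallelStop()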
Useful if your optimization is likely to crash, so you can continue from a saved state without losing too much information and runtime.
mboContinue(opt.state)
Arguments: opt.state.
Value: See mbo.
Useful if your optimization did not terminate but you want a result nonetheless.
mboFinalize(file)
Arguments: file.
Value: See mbo.
pareto.front [matrix]: Pareto front of all evaluated points.
pareto.set [list of lists]: Pareto set of all evaluated points.
pareto.inds [numeric]: Indices of the Pareto-optimal points in the opt.path.
opt.path [OptPath]: Optimization path. Includes all evaluated points and additional information as documented in mbo_OptPath. You can convert it via as.data.frame.
final.state [character]: The final termination state. Gives information why the optimization ended.
models [list of WrappedModel]: List of saved regression models.
control [MBOControl]: Control object used in optimization.
x [list]: Named list of proposed optimal parameters.
y [numeric(1)]: Value of the objective function at x, either from evals during optimization or from requested final evaluations, if those were greater than 0.
best.ind [numeric(1)]: Index of x in the opt.path.
opt.path [OptPath]: Optimization path. Includes all evaluated points and additional information as documented in mbo_OptPath. You can convert it via as.data.frame.
resample.results [list of ResampleResult]: List of the desired resample.results if resample.at is set in makeMBOControl.
final.state [character]: The final termination state. Gives information why the optimization ended. Possible values are:
- Maximal number of iterations reached.
- Maximal running time exceeded.
- Maximal execution time of function evaluations reached.
- Target function value reached.
- Maximal number of function evaluations reached.
- Terminated due to custom, user-defined termination condition.
models [list of WrappedModel]: List of saved regression models if store.model.at is set in makeMBOControl. The default is that it contains the model generated after the last iteration.
control [MBOControl]: Control object used in optimization.
Different usage scenarios of mlrMBO, with visualizations.
#####################################################
###
### optimizing a simple sin(x) with mbo / EI
###
#####################################################
## Not run: 
library(ggplot2)
library(mlrMBO)
configureMlr(show.learner.output = FALSE)
set.seed(1)

obj.fun = makeSingleObjectiveFunction(
  name = "Sine",
  fn = function(x) sin(x),
  par.set = makeNumericParamSet(lower = 3, upper = 13, len = 1),
  global.opt.value = -1
)

ctrl = makeMBOControl(propose.points = 1)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
  opt = "focussearch", opt.focussearch.points = 500L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(6L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 100, show.info = TRUE)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing branin in 2D with mbo / EI
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

obj.fun = makeBraninFunction()

ctrl = makeMBOControl(propose.points = 1L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
  opt = "focussearch", opt.focussearch.points = 2000L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(10L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing a simple sin(x) with multipoint proposal
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

obj.fun = makeSingleObjectiveFunction(
  name = "Sine",
  fn = function(x) sin(x),
  par.set = makeNumericParamSet(lower = 3, upper = 13, len = 1L),
  global.opt.value = -1
)

ctrl = makeMBOControl(propose.points = 2L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritMeanResponse())
ctrl = setMBOControlMultiPoint(
  ctrl,
  method = "moimbo",
  moimbo.objective = "ei.dist",
  moimbo.dist = "nearest.neighbor",
  moimbo.maxit = 200L
)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(4L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 100, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing branin in 2D with multipoint proposal
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(2)
configureMlr(show.learner.output = FALSE)

obj.fun = makeBraninFunction()

ctrl = makeMBOControl(propose.points = 5L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritMeanResponse())
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlMultiPoint(ctrl,
  method = "moimbo",
  moimbo.objective = "ei.dist",
  moimbo.dist = "nearest.neighbor",
  moimbo.maxit = 200L
)
lrn = makeLearner("regr.km", predict.type = "se")
design = generateDesign(10L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing a simple noisy sin(x) with mbo / EI
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

# function with noise
obj.fun = makeSingleObjectiveFunction(
  name = "Some noisy function",
  fn = function(x) sin(x) + rnorm(1, 0, 0.1),
  par.set = makeNumericParamSet(lower = 3, upper = 13, len = 1L),
  noisy = TRUE,
  global.opt.value = -1,
  fn.mean = function(x) sin(x)
)

ctrl = makeMBOControl(
  propose.points = 1L,
  final.method = "best.predicted",
  final.evals = 10L
)
ctrl = setMBOControlTermination(ctrl, iters = 5L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
  opt = "focussearch", opt.focussearch.points = 500L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(6L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 200L, noisy.evals = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing 1D fun with 3 categorical levels and
### noisy output with random forest
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

obj.fun = makeSingleObjectiveFunction(
  name = "Mixed decision space function",
  fn = function(x) {
    if (x$foo == "a") {
      return(5 + x$bar^2 + rnorm(1))
    } else if (x$foo == "b") {
      return(4 + x$bar^2 + rnorm(1, sd = 0.5))
    } else {
      return(3 + x$bar^2 + rnorm(1, sd = 1))
    }
  },
  par.set = makeParamSet(
    makeDiscreteParam("foo", values = letters[1:3]),
    makeNumericParam("bar", lower = -5, upper = 5)
  ),
  has.simple.signature = FALSE, # function expects a named list of parameter values
  noisy = TRUE
)

ctrl = makeMBOControl()
ctrl = setMBOControlTermination(ctrl, iters = 10L)
# we can basically do an exhaustive search in 3 values
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
  opt.restarts = 1L, opt.focussearch.points = 3L, opt.focussearch.maxit = 1L)
design = generateDesign(20L, getParamSet(obj.fun), fun = lhs::maximinLHS)
lrn = makeMBOLearner(ctrl, obj.fun)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 50L, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing mixed space function
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

obj.fun = makeSingleObjectiveFunction(
  name = "Mixed functions",
  fn = function(x) {
    if (x$cat == "a") x$num^2 else x$num^2 + 3
  },
  par.set = makeParamSet(
    makeDiscreteParam("cat", values = c("a", "b")),
    makeNumericParam("num", lower = -5, upper = 5)
  ),
  has.simple.signature = FALSE,
  global.opt.value = -1
)

ctrl = makeMBOControl(propose.points = 1L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
  opt = "focussearch", opt.focussearch.points = 500L)
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(4L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRun(obj.fun, design = design, learner = lrn,
  control = ctrl, points.per.dim = 100L, show.info = TRUE)
print(run)
plotExampleRun(run, densregion = TRUE, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing multi-objective function
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

obj.fun = makeZDT1Function(dimensions = 2L)

ctrl = makeMBOControl(n.objectives = 2L, propose.points = 2L,
  save.on.disk.at = integer(0L))
ctrl = setMBOControlTermination(ctrl, iters = 5L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritDIB(),
  opt.focussearch.points = 10000L)
ctrl = setMBOControlMultiObj(ctrl, parego.s = 100)
learner = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(5L, getParamSet(obj.fun), fun = lhs::maximinLHS)

run = exampleRunMultiObj(obj.fun, design = design, learner = learner, ctrl,
  points.per.dim = 50L, show.info = TRUE, nsga2.args = list())
plotExampleRun(run, gg.objects = list(theme_bw()))
## End(Not run)

#####################################################
###
### optimizing multi-objective function and plots
###
#####################################################
## Not run: 
library(mlrMBO)
library(ggplot2)
set.seed(1)
configureMlr(show.learner.output = FALSE)

obj.fun = makeDTLZ1Function(dimensions = 5L, n.objectives = 2L)

ctrl = makeMBOControl(n.objectives = 2L, propose.points = 2L)
ctrl = setMBOControlTermination(ctrl, iters = 10L)
ctrl = setMBOControlInfill(ctrl, crit = makeMBOInfillCritEI(),
  opt.focussearch.points = 1000L, opt.focussearch.maxit = 3L)
ctrl = setMBOControlMultiObj(ctrl, method = "parego")
lrn = makeMBOLearner(ctrl, obj.fun)
design = generateDesign(8L, getParamSet(obj.fun), fun = lhs::maximinLHS)

res = mbo(obj.fun, design = design, learner = lrn, control = ctrl, show.info = TRUE)
plot(res)
## End(Not run)
The OptProblem contains all the constant values which define an optimization problem within our MBO steps. It is an environment and is always pointed at by the OptState.
The OptResult stores all entities which are not needed while optimizing but are needed to build the final result. It can contain fitted surrogate models at certain times as well as resample objects. When the optimization has ended it will contain the [MBOResult].
The OptState is the central component of the MBO iterations. This environment contains every necessary piece of information needed during optimization in MBO. It also links to the OptProblem and to the OptResult.
Plots the values of the infill criterion for a 1- and 2-dimensional numerical search space for a given OptState.
## S3 method for class 'OptState'
plot(x, scale.panels = FALSE, points.per.dim = 100, ...)
Arguments: x, scale.panels, points.per.dim, ... [any].
The graphical output depends on the target function at hand.
- For 1D numeric functions the upper plot shows the true function (if known), the model and the (infill) points. The lower plot shows the infill criterion.
- For 2D mixed target functions only one plot is displayed.
- For 2D numeric-only target functions up to four plots are presented to the viewer:
  - levelplot of the true function landscape (with [infill] points),
  - levelplot of the model landscape (with [infill] points),
  - levelplot of the infill criterion,
  - levelplot of the standard error (only if the learner supports standard error estimation).
- For bi-criteria target functions the upper plot shows the target space and the lower plot displays the x-space.
plotExampleRun(object, iters, pause = interactive(), densregion = TRUE,
  se.factor = 1, single.prop.point.plots = FALSE, xlim = NULL, ylim = NULL,
  point.size = 3, line.size = 1, trafo = NULL,
  colors = c("red", "blue", "green"), gg.objects = list(), ...)
Arguments: object, iters, pause, densregion, se.factor, single.prop.point.plots, xlim, ylim, point.size, line.size, trafo, colors, gg.objects, ... [any].
Value: Nothing.
Plots any MBO result object. Plots for X-space, Y-space and any column in the optimization path are available. This function uses plotOptPath from package ParamHelpers.
## S3 method for class 'MBOSingleObjResult'
plot(x, iters = NULL, pause = interactive(), ...)
## S3 method for class 'MBOMultiObjResult'
plot(x, iters = NULL, pause = interactive(), ...)
Arguments: x, iters, pause, ... (additional parameters for plotOptPath).
Print mbo control object.
## S3 method for class 'MBOControl'
print(x, ...)
Arguments: x, ... [any].
Propose points for the objective function that should be evaluated according to the infill criterion and the recent evaluations.
proposePoints(opt.state)
Arguments: opt.state.
The graphical output depends on the target function at hand.
- For 1D numeric functions the upper plot shows the true function (if known), the model and the (infill) points. The lower plot shows the infill criterion.
- For 2D mixed target functions only one plot is displayed.
- For 2D numeric-only target functions up to four plots are presented to the viewer:
  - levelplot of the true function landscape (with [infill] points),
  - levelplot of the model landscape (with [infill] points),
  - levelplot of the infill criterion,
  - levelplot of the standard error (only if the learner supports standard error estimation).
- For bi-criteria target functions the upper plot shows the target space and the lower plot displays the x-space.
renderExampleRunPlot(object, iter, densregion = TRUE, se.factor = 1,
  single.prop.point.plots = FALSE, xlim = NULL, ylim = NULL, point.size = 3,
  line.size = 1, trafo = NULL, colors = c("red", "blue", "green"), ...)
Arguments: object, iter, densregion, se.factor, single.prop.point.plots, xlim, ylim, point.size, line.size, trafo, colors, ... [any].
Value: [list]. List containing separate ggplot objects. The number of plots depends on the type of MBO problem. See the description for details.
Please note that internally all infill criteria are minimized. So for some of them, we internally compute their negated version, e.g., for EI or also for CB when the objective is to be maximized. In the latter case mlrMBO actually computes the negative upper confidence bound and minimizes that.
Extends an MBO control object with infill criteria and infill optimizer options.
setMBOControlInfill(control, crit = NULL, interleave.random.points = 0L,
  filter.proposed.points = NULL, filter.proposed.points.tol = NULL,
  opt = "focussearch", opt.restarts = NULL, opt.focussearch.maxit = NULL,
  opt.focussearch.points = NULL, opt.cmaes.control = NULL, opt.ea.maxit = NULL,
  opt.ea.mu = NULL, opt.ea.sbx.eta = NULL, opt.ea.sbx.p = NULL,
  opt.ea.pm.eta = NULL, opt.ea.pm.p = NULL, opt.ea.lambda = NULL,
  opt.nsga2.popsize = NULL, opt.nsga2.generations = NULL,
  opt.nsga2.cprob = NULL, opt.nsga2.cdist = NULL, opt.nsga2.mprob = NULL,
  opt.nsga2.mdist = NULL)
Arguments: control, crit, interleave.random.points, filter.proposed.points, filter.proposed.points.tol, opt, opt.restarts, opt.focussearch.maxit, opt.focussearch.points, opt.cmaes.control, opt.ea.maxit, opt.ea.mu, opt.ea.sbx.eta, opt.ea.sbx.p, opt.ea.pm.eta, opt.ea.pm.p, opt.ea.lambda, opt.nsga2.popsize, opt.nsga2.generations, opt.nsga2.cprob, opt.nsga2.cdist, opt.nsga2.mprob, opt.nsga2.mdist.
Value: [MBOControl].
Other MBOControl: makeMBOControl(), setMBOControlMultiObj(), setMBOControlMultiPoint(), setMBOControlTermination()
Extends MBO control object with multi-objective specific options.
setMBOControlMultiObj(control, method = NULL, ref.point.method = NULL,
  ref.point.offset = NULL, ref.point.val = NULL, parego.s = NULL,
  parego.rho = NULL, parego.use.margin.points = NULL,
  parego.sample.more.weights = NULL, parego.normalize = NULL,
  dib.indicator = NULL, mspot.select.crit = NULL)
Arguments: control, method, ref.point.method, ref.point.offset, ref.point.val, parego.s, parego.rho, parego.use.margin.points, parego.sample.more.weights, parego.normalize, dib.indicator, mspot.select.crit.
Value: [MBOControl].
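For instance, a minimal ParEGO setup for a bi-objective problem; the parameter values are illustrative:
ctrl = makeMBOControl(n.objectives = 2L, propose.points = 4L)
ctrl = setMBOControlMultiObj(ctrl, method = "parego", parego.s = 100L)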
For more information on the implemented multi-objective procedures the following sources might be helpful:
- Knowles, J.: ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems. IEEE Transactions on Evolutionary Computation, 10 (2006) 1, pp. 50-66.
- Wagner, T.; Emmerich, M.; Deutz, A.; Ponweiser, W.: On Expected-Improvement Criteria for Model-Based Multi-Objective Optimization. In: Proc. 11th Int. Conf. Parallel Problem Solving From Nature (PPSN XI) - Part I, Krakow, Poland, Schaefer, R.; Cotta, C.; Kolodziej, J.; Rudolph, G. (eds.), no. 6238 in Lecture Notes in Computer Science, Springer, Berlin, 2010, ISBN 978-3-642-15843-8, pp. 718-727, doi:10.1007/978-3-642-15844-5_72.
- Wagner, T.: Planning and Multi-Objective Optimization of Manufacturing Processes by Means of Empirical Surrogate Models. No. 71 in Schriftenreihe des ISF, Vulkan Verlag, Essen, 2013, ISBN 978-3-8027-8775-1.
- Zaefferer, M.; Bartz-Beielstein, T.; Naujoks, B.; Wagner, T.; Emmerich, M.: A Case Study on Multi-Criteria Optimization of an Event Detection Software under Limited Budgets. In: Proc. 7th International Conf. Evolutionary Multi-Criterion Optimization (EMO 2013), March 19-22, Sheffield, UK, R. Purshouse; P. J. Fleming; C. M. Fonseca; S. Greco; J. Shaw, eds., 2013, vol. 7811 of Lecture Notes in Computer Science, ISBN 978-3-642-37139-4, pp. 756-770, doi:10.1007/978-3-642-37140-0_56.
- Jeong, S.; Obayashi, S.: Efficient global optimization (EGO) for Multi-Objective Problem and Data Mining. In: Proc. IEEE Congress on Evolutionary Computation (CEC 2005), Edinburgh, UK, Corne, D. et al. (eds.), IEEE, 2005, ISBN 0-7803-9363-5, pp. 2138-2145.
Other MBOControl: makeMBOControl(), setMBOControlInfill(), setMBOControlMultiPoint(), setMBOControlTermination()
Extends an MBO control object with options for multipoint proposal.
setMBOControlMultiPoint(control, method = NULL, cl.lie = NULL,
  moimbo.objective = NULL, moimbo.dist = NULL, moimbo.selection = NULL,
  moimbo.maxit = NULL, moimbo.sbx.eta = NULL, moimbo.sbx.p = NULL,
  moimbo.pm.eta = NULL, moimbo.pm.p = NULL)
Arguments: control, method, cl.lie, moimbo.objective, moimbo.dist, moimbo.selection, moimbo.maxit, moimbo.sbx.eta, moimbo.sbx.p, moimbo.pm.eta, moimbo.pm.p.
Value: [MBOControl].
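A sketch of a batch proposal via the constant-liar strategy; the batch size of 4 is an illustrative choice:
ctrl = makeMBOControl(propose.points = 4L)
ctrl = setMBOControlMultiPoint(ctrl, method = "cl", cl.lie = min)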
Other MBOControl: makeMBOControl(), setMBOControlInfill(), setMBOControlMultiObj(), setMBOControlTermination()
Extends an MBO control object with termination conditions.
setMBOControlTermination(control, iters = NULL, time.budget = NULL,
  exec.time.budget = NULL, target.fun.value = NULL, max.evals = NULL,
  more.termination.conds = list(), use.for.adaptive.infill = NULL)
Arguments: control, iters, time.budget, exec.time.budget, target.fun.value, max.evals, more.termination.conds, use.for.adaptive.infill.
Value: [MBOControl].
Other MBOControl: makeMBOControl(), setMBOControlInfill(), setMBOControlMultiObj(), setMBOControlMultiPoint()
fn = smoof::makeSphereFunction(1L)
ctrl = makeMBOControl()
# custom termination condition (stop if target function value reached)
# We neglect the optimization direction (min/max) in this example.
yTargetValueTerminator = function(y.val) {
  force(y.val)
  function(opt.state) {
    opt.path = opt.state$opt.path
    current.best = getOptPathEl(opt.path, getOptPathBestIndex(opt.path))$y
    term = (current.best <= y.val)
    message = if (!term) NA_character_ else sprintf("Target function value %f reached.", y.val)
    return(list(term = term, message = message))
  }
}
# assign custom termination condition
ctrl = setMBOControlTermination(ctrl,
  more.termination.conds = list(yTargetValueTerminator(0.05)))
res = mbo(fn, control = ctrl)
print(res)
logTrafo: Natural logarithm.
sqrtTrafo: Square root.
If negative values occur and the trafo function can handle only positive values, a shift of the form x - min(x) + 1 is performed prior to the transformation if the argument handle.violations is set to “warn”, which is the default value.
trafoLog(base = 10, handle.violations = "warn")
trafoSqrt(handle.violations = "warn")
Arguments: base, handle.violations.
Value: None.
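A sketch of a plausible use, assuming plotExampleRun accepts the trafo argument as a named list of such transformation functions (run as produced by exampleRun above):
plotExampleRun(run, trafo = list(y = trafoLog(base = 10)))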
After a function evaluation you want to update the OptState to get new proposals.
updateSMBO(opt.state, x, y)
Arguments: opt.state, x, y.
Value: [OptState].