Title: | Black-Box Optimization Toolkit |
---|---|
Description: | Features highly configurable search spaces via the 'paradox' package and optimizes every user-defined objective function. The package includes several optimization algorithms e.g. Random Search, Iterated Racing, Bayesian Optimization (in 'mlr3mbo') and Hyperband (in 'mlr3hyperband'). bbotk is the base package of 'mlr3tuning', 'mlr3fselect' and 'miesmuschel'. |
Authors: | Marc Becker [cre, aut] , Jakob Richter [aut] , Michel Lang [aut] , Bernd Bischl [aut] , Martin Binder [aut], Olaf Mersmann [ctb] |
Maintainer: | Marc Becker <[email protected]> |
License: | LGPL-3 |
Version: | 1.4.0 |
Built: | 2024-11-26 20:48:00 UTC |
Source: | CRAN |
Useful links:
Report bugs at https://github.com/mlr-org/bbotk/issues
The Archive class stores all evaluated points and performance scores.
The Archive is an abstract class that implements the base functionality each archive must provide.
search_space
(paradox::ParamSet)
Specification of the search space for the Optimizer.
codomain
(Codomain)
Codomain of objective function.
start_time
(POSIXct)
Time stamp of when the optimization started.
The time is set by the Optimizer.
check_values
(logical(1))
Determines if points and results are checked for validity.
label
(character(1))
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
The referenced help package can be opened via method $help().
cols_x
(character())
Column names of search space parameters.
cols_y
(character())
Column names of codomain target parameters.
new()
Creates a new instance of this R6 class.
Archive$new( search_space, codomain, check_values = FALSE, label = NA_character_, man = NA_character_ )
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
check_values
(logical(1))
Should x-values that are added to the archive be checked for validity?
label
(character(1))
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
The referenced help package can be opened via method $help().
format()
Helper for print outputs.
Archive$format(...)
...
(ignored).
print()
Printer.
Archive$print()
...
(ignored).
clear()
Clear all evaluation results from archive.
Archive$clear()
help()
Opens the corresponding help page referenced by field $man.
Archive$help()
clone()
The objects of this class are cloneable with this method.
Archive$clone(deep = FALSE)
deep
Whether to make a deep clone.
The ArchiveAsync stores all evaluated points and performance scores in a rush::Rush database.
as.data.table(archive)
ArchiveAsync -> data.table::data.table()
Returns a tabular view of all performed function calls of the Objective.
The x_domain column is unnested to separate columns.
bbotk::Archive
-> ArchiveAsync
rush
(Rush)
Rush controller for parallel optimization.
data
(data.table::data.table)
Data table with all finished points.
queued_data
(data.table::data.table)
Data table with all queued points.
running_data
(data.table::data.table)
Data table with all running points.
finished_data
(data.table::data.table)
Data table with all finished points.
failed_data
(data.table::data.table)
Data table with all failed points.
n_queued
(integer(1))
Number of queued points.
n_running
(integer(1))
Number of running points.
n_finished
(integer(1))
Number of finished points.
n_failed
(integer(1))
Number of failed points.
n_evals
(integer(1))
Number of evaluations stored in the archive.
new()
Creates a new instance of this R6 class.
ArchiveAsync$new(search_space, codomain, check_values = FALSE, rush)
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
check_values
(logical(1))
Should points before the evaluation and the results be checked for validity?
rush
(Rush)
If a rush instance is supplied, the tuning runs without batches.
push_points()
Push queued points to the archive.
ArchiveAsync$push_points(xss)
xss
(list of named list())
List of named lists of point values.
pop_point()
Pop a point from the queue.
ArchiveAsync$pop_point()
push_running_point()
Push running point to the archive.
ArchiveAsync$push_running_point(xs, extra = NULL)
xs
(named list)
Named list of point values.
extra
(list())
Named list of additional information.
push_result()
Push result to the archive.
ArchiveAsync$push_result(key, ys, x_domain, extra = NULL)
key
(character())
Key of the point.
ys
(list())
Named list of results.
x_domain
(list())
Named list of transformed point values.
extra
(list())
Named list of additional information.
push_failed_point()
Push failed point to the archive.
ArchiveAsync$push_failed_point(key, message)
key
(character())
Key of the point.
message
(character())
Error message.
data_with_state()
Fetch points with a specific state.
ArchiveAsync$data_with_state( fields = c("xs", "ys", "xs_extra", "worker_extra", "ys_extra", "condition"), states = c("queued", "running", "finished", "failed"), reset_cache = FALSE )
fields
(character())
Fields to fetch.
Defaults to c("xs", "ys", "xs_extra", "worker_extra", "ys_extra").
states
(character())
States of the tasks to be fetched.
Defaults to c("queued", "running", "finished", "failed").
reset_cache
(logical(1))
Whether to reset the cache of the finished points.
best()
Returns the best scoring evaluation(s). For single-crit optimization, the solution that minimizes / maximizes the objective function. For multi-crit optimization, the Pareto set / front.
ArchiveAsync$best(n_select = 1, ties_method = "first")
n_select
(integer(1L))
Amount of points to select.
Ignored for multi-crit optimization.
ties_method
(character(1L))
Method to break ties when multiple points have the same score.
Either "first" (default) or "random".
Ignored for multi-crit optimization.
If n_select > 1L, the tie method is ignored and the first point is returned.
nds_selection()
Calculate best points w.r.t. non-dominated sorting with hypervolume contribution.
ArchiveAsync$nds_selection(n_select = 1, ref_point = NULL)
n_select
(integer(1L))
Amount of points to select.
ref_point
(numeric())
Reference point for hypervolume.
clear()
Clear all evaluation results from archive.
ArchiveAsync$clear()
The ArchiveBatch stores all evaluated points and performance scores in a data.table::data.table().
as.data.table(archive)
ArchiveBatch -> data.table::data.table()
Returns a tabular view of all performed function calls of the Objective.
The x_domain column is unnested to separate columns.
bbotk::Archive
-> ArchiveBatch
data
(data.table::data.table)
Contains all performed Objective function calls.
data_extra
(named list)
Data created by specific Optimizers that does not relate to any individual function evaluation and can therefore not be held in $data.
Every optimizer should create and refer to its own entry in this list, named by its class().
n_evals
(integer(1))
Number of evaluations stored in the archive.
n_batch
(integer(1))
Number of batches stored in the archive.
new()
Creates a new instance of this R6 class.
ArchiveBatch$new(search_space, codomain, check_values = FALSE)
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
check_values
(logical(1))
Should x-values that are added to the archive be checked for validity?
add_evals()
Adds function evaluations to the archive table.
ArchiveBatch$add_evals(xdt, xss_trafoed = NULL, ydt)
xdt
(data.table::data.table())
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4)).
Column names have to match ids of the search_space.
However, xdt can contain additional columns.
xss_trafoed
(list())
Transformed point(s) in the domain space.
ydt
(data.table::data.table())
Optimal outcome.
best()
Returns the best scoring evaluation(s). For single-crit optimization, the solution that minimizes / maximizes the objective function. For multi-crit optimization, the Pareto set / front.
ArchiveBatch$best(batch = NULL, n_select = 1L, ties_method = "first")
batch
(integer())
The batch number(s) to limit the best results to.
Default is all batches.
n_select
(integer(1L))
Amount of points to select.
Ignored for multi-crit optimization.
ties_method
(character(1L))
Method to break ties when multiple points have the same score.
Either "first" (default) or "random".
Ignored for multi-crit optimization.
If n_select > 1L, the tie method is ignored and the first point is returned.
nds_selection()
Calculate best points w.r.t. non-dominated sorting with hypervolume contribution.
ArchiveBatch$nds_selection(batch = NULL, n_select = 1, ref_point = NULL)
batch
(integer())
The batch number(s) to limit the best points to.
Default is all batches.
n_select
(integer(1L))
Amount of points to select.
ref_point
(numeric())
Reference point for hypervolume.
clear()
Clear all evaluation results from archive.
ArchiveBatch$clear()
clone()
The objects of this class are cloneable with this method.
ArchiveBatch$clone(deep = FALSE)
deep
Whether to make a deep clone.
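A minimal sketch of how an ArchiveBatch is typically filled and queried. It assumes the trm() and opt() sugar functions and an OptimInstanceBatchSingleCrit as used in the examples further down; the search space defaults to the domain of the objective.
library(bbotk)
library(paradox)

objective = ObjectiveRFun$new(
  fun = function(xs) list(y = as.numeric(xs)^2),
  domain = ps(x = p_dbl(lower = -1, upper = 1)),
  codomain = ps(y = p_dbl(tags = "minimize")))
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 20))
opt("random_search")$optimize(instance)

# the instance owns an ArchiveBatch; query it
instance$archive$n_evals        # number of evaluations
instance$archive$best()         # best scoring evaluation
as.data.table(instance$archive) # tabular view, x_domain unnested to columns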
Convert an object to a Terminator or a list of Terminators.
as_terminator(x, ...)

## S3 method for class 'Terminator'
as_terminator(x, clone = FALSE, ...)

as_terminators(x, ...)

## Default S3 method:
as_terminators(x, ...)

## S3 method for class 'list'
as_terminators(x, ...)
x
(any)
Object to convert.
...
(any)
Additional arguments.
clone
(logical(1))
If TRUE, ensures that the returned object is not the same instance as the input x.
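As a brief illustration (a sketch assuming the trm() sugar function and the "evals" and "run_time" terminators from this package):
term = trm("evals", n_evals = 100)
as_terminator(term)

# a list of Terminators is converted element-wise
as_terminators(list(trm("evals", n_evals = 100), trm("run_time", secs = 60)))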
This function optimizes a function or Objective with a given method.
bb_optimize(
  x,
  method = "random_search",
  max_evals = 1000,
  max_time = NULL,
  ...
)

## S3 method for class 'function'
bb_optimize(
  x,
  method = "random_search",
  max_evals = 1000,
  max_time = NULL,
  lower = NULL,
  upper = NULL,
  maximize = FALSE,
  ...
)

## S3 method for class 'Objective'
bb_optimize(
  x,
  method = "random_search",
  max_evals = 1000,
  max_time = NULL,
  search_space = NULL,
  ...
)
x
(function | Objective)
Function to optimize or Objective.
method
(character(1))
Key of the Optimizer in the mlr_optimizers dictionary; default is "random_search".
max_evals
(integer(1))
Number of allowed evaluations.
max_time
(integer(1))
Maximum allowed time in seconds.
...
(named list())
Named arguments passed to the objective function.
lower
(numeric())
Lower bounds on the parameters.
upper
(numeric())
Upper bounds on the parameters.
maximize
(logical())
If TRUE, the objective function is maximized; default is FALSE (minimization).
search_space
(paradox::ParamSet)
Search space; only used by the Objective method.
list of
"par" - Best found parameters
"value" - Optimal outcome
"instance" - OptimInstanceBatchSingleCrit | OptimInstanceBatchMultiCrit
If both max_evals and max_time are NULL, TerminatorNone is used. This is useful if the Optimizer can terminate itself. If both are given, TerminatorCombo is created and the optimization stops if the time or evaluation budget is exhausted.
# function and bounds
fun = function(xs) {
  -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10
}
bb_optimize(fun, lower = c(-10, -5), upper = c(10, 5), max_evals = 10)

# function and constant
fun = function(xs, c) {
  -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + c
}
bb_optimize(fun, lower = c(-10, -5), upper = c(10, 5), max_evals = 10, c = 1)

# objective
fun = function(xs) {
  c(z = -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
}

# define domain and codomain using a `ParamSet` from paradox
domain = ps(x1 = p_dbl(-10, 10), x2 = p_dbl(-5, 5))
codomain = ps(z = p_dbl(tags = "minimize"))
objective = ObjectiveRFun$new(fun, domain, codomain)
bb_optimize(objective, method = "random_search", max_evals = 10)
This CallbackBatch writes the Archive after each batch to disk.
clbk("bbotk.backup", path = "backup.rds")
Classic 2-D Branin function with noise branin(x1, x2, noise) and Branin function with fidelity parameter branin_wu(x1, x2, fidelity).
branin(x1, x2, noise = 0)

branin_wu(x1, x2, fidelity)
x1
(numeric())
x2
(numeric())
noise
(numeric(1))
fidelity
(numeric(1))
numeric()
Wu J, Toscano-Palmerin S, Frazier PI, Wilson AG (2019). “Practical Multi-fidelity Bayesian Optimization for Hyperparameter Tuning.” arXiv:1903.04703.
branin(x1 = 12, x2 = 2, noise = 0.05)

branin_wu(x1 = 12, x2 = 2, fidelity = 1)
Function to create a CallbackAsync.
Optimization callbacks can be called from different stages of the optimization process.
The stages are prefixed with on_*.
Start Optimization
  - on_optimization_begin
  Start Worker
    - on_worker_begin
    Start Optimization on Worker
      - on_optimizer_before_eval
      - on_optimizer_after_eval
    End Optimization on Worker
    - on_worker_end
  End Worker
  - on_result_begin
  - on_result_end
  - on_optimization_end
End Optimization
See also the section on parameters for more information on the stages. An optimization callback works with ContextAsync.
callback_async(
  id,
  label = NA_character_,
  man = NA_character_,
  on_optimization_begin = NULL,
  on_worker_begin = NULL,
  on_optimizer_before_eval = NULL,
  on_optimizer_after_eval = NULL,
  on_worker_end = NULL,
  on_result_begin = NULL,
  on_result_end = NULL,
  on_result = NULL,
  on_optimization_end = NULL
)
id
(character(1))
Identifier for the new instance.
label
(character(1))
Label for the new instance.
man
(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
on_optimization_begin
(function())
Stage called at the beginning of the optimization in the main process.
Called in Optimizer$optimize().
on_worker_begin
(function())
Stage called at the beginning of the optimization on the worker.
Called in the worker loop.
on_optimizer_before_eval
(function())
Stage called after the optimizer proposes points.
Called in OptimInstance$.eval_point().
on_optimizer_after_eval
(function())
Stage called after points are evaluated.
Called in OptimInstance$.eval_point().
on_worker_end
(function())
Stage called at the end of the optimization on the worker.
Called in the worker loop.
on_result_begin
(function())
Stage called before the results are written.
Called in OptimInstance$assign_result().
on_result_end
(function())
Stage called after the results are written.
Called in OptimInstance$assign_result().
on_result
(function())
Deprecated.
Use on_result_end instead.
on_optimization_end
(function())
Stage called at the end of the optimization in the main process.
Called in Optimizer$optimize().
A callback can write data to its state ($state), e.g. settings that affect the callback itself.
The ContextAsync allows modifying the instance, archive, optimizer and final result.
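As a sketch analogous to the callback_batch() example further down, a callback that saves the final result when the optimization ends could look as follows; the id "bbotk.async_backup" and the file name are made up for illustration.
callback_async("bbotk.async_backup",
  on_optimization_end = function(callback, context) {
    # context is a ContextAsync; $result holds the optimization result
    saveRDS(context$result, "result.rds")
  }
)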
Function to create a CallbackBatch.
Optimization callbacks can be called from different stages of the optimization process.
The stages are prefixed with on_*.
Start Optimization
  - on_optimization_begin
  Start Optimizer Batch
    - on_optimizer_before_eval
    - on_optimizer_after_eval
  End Optimizer Batch
  - on_result_begin
  - on_result_end
  - on_optimization_end
End Optimization
See also the section on parameters for more information on the stages. An optimization callback works with ContextBatch.
callback_batch(
  id,
  label = NA_character_,
  man = NA_character_,
  on_optimization_begin = NULL,
  on_optimizer_before_eval = NULL,
  on_optimizer_after_eval = NULL,
  on_result_begin = NULL,
  on_result_end = NULL,
  on_result = NULL,
  on_optimization_end = NULL
)
id
(character(1))
Identifier for the new instance.
label
(character(1))
Label for the new instance.
man
(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
on_optimization_begin
(function())
Stage called at the beginning of the optimization.
Called in Optimizer$optimize().
on_optimizer_before_eval
(function())
Stage called after the optimizer proposes points.
Called in OptimInstance$eval_batch().
on_optimizer_after_eval
(function())
Stage called after points are evaluated.
Called in OptimInstance$eval_batch().
on_result_begin
(function())
Stage called before the results are written.
Called in OptimInstance$assign_result().
on_result_end
(function())
Stage called after the results are written.
Called in OptimInstance$assign_result().
on_result
(function())
Deprecated.
Use on_result_end instead.
on_optimization_end
(function())
Stage called at the end of the optimization.
Called in Optimizer$optimize().
A callback can write data to its state ($state), e.g. settings that affect the callback itself.
The ContextBatch allows modifying the instance, archive, optimizer and final result.
# write archive to disk
callback_batch("bbotk.backup",
  on_optimization_end = function(callback, context) {
    saveRDS(context$instance$archive, "archive.rds")
  }
)
Specialized mlr3misc::Callback for asynchronous optimization.
Callbacks allow customizing the behavior of processes in bbotk.
The callback_async() function creates a CallbackAsync.
Predefined callbacks are stored in the dictionary mlr_callbacks and can be retrieved with clbk().
For more information on optimization callbacks see callback_async().
mlr3misc::Callback
-> CallbackAsync
on_optimization_begin
(function())
Stage called at the beginning of the optimization in the main process.
Called in Optimizer$optimize().
on_worker_begin
(function())
Stage called at the beginning of the optimization on the worker.
Called in the worker loop.
on_optimizer_before_eval
(function())
Stage called after the optimizer proposes points.
Called in OptimInstance$.eval_point().
on_optimizer_after_eval
(function())
Stage called after points are evaluated.
Called in OptimInstance$.eval_point().
on_worker_end
(function())
Stage called at the end of the optimization on the worker.
Called in the worker loop.
on_result_begin
(function())
Stage called before the results are written.
Called in OptimInstance$assign_result().
on_result_end
(function())
Stage called after the results are written.
Called in OptimInstance$assign_result().
on_optimization_end
(function())
Stage called at the end of the optimization in the main process.
Called in Optimizer$optimize().
clone()
The objects of this class are cloneable with this method.
CallbackAsync$clone(deep = FALSE)
deep
Whether to make a deep clone.
Specialized mlr3misc::Callback for batch optimization.
Callbacks allow customizing the behavior of processes in bbotk.
The callback_batch() function creates a CallbackBatch.
Predefined callbacks are stored in the dictionary mlr_callbacks and can be retrieved with clbk().
For more information on optimization callbacks see callback_batch().
mlr3misc::Callback
-> CallbackBatch
on_optimization_begin
(function())
Stage called at the beginning of the optimization.
Called in Optimizer$optimize().
on_optimizer_before_eval
(function())
Stage called after the optimizer proposes points.
Called in OptimInstance$eval_batch().
on_optimizer_after_eval
(function())
Stage called after points are evaluated.
Called in OptimInstance$eval_batch().
on_result_begin
(function())
Stage called before the results are written.
Called in OptimInstance$assign_result().
on_result_end
(function())
Stage called after the results are written.
Called in OptimInstance$assign_result().
on_optimization_end
(function())
Stage called at the end of the optimization.
Called in Optimizer$optimize().
clone()
The objects of this class are cloneable with this method.
CallbackBatch$clone(deep = FALSE)
deep
Whether to make a deep clone.
# write archive to disk
callback_batch("bbotk.backup",
  on_optimization_end = function(callback, context) {
    saveRDS(context$instance$archive, "archive.rds")
  }
)
A paradox::ParamSet defining the codomain of a function. The parameter set must contain at least one target parameter tagged with "minimize" or "maximize". The codomain may contain extra parameters which are ignored when calling the Archive methods $best(), $nds_selection() and $cols_y. This class is usually constructed internally from a paradox::ParamSet when Objective is initialized.
paradox::ParamSet
-> Codomain
is_target
(named logical())
Position is TRUE for target parameters.
target_length
(integer())
Returns number of target parameters.
target_ids
(character())
IDs of contained target parameters.
target_tags
(named list() of character())
Tags of target parameters.
maximization_to_minimization
(integer())
Returns a numeric vector with values -1 and 1. Multiply with the outcome of a maximization problem to turn it into a minimization problem.
paradox::ParamSet$add_dep()
paradox::ParamSet$aggr_internal_tuned_values()
paradox::ParamSet$assert()
paradox::ParamSet$assert_dt()
paradox::ParamSet$check()
paradox::ParamSet$check_dependencies()
paradox::ParamSet$check_dt()
paradox::ParamSet$convert_internal_search_space()
paradox::ParamSet$disable_internal_tuning()
paradox::ParamSet$flatten()
paradox::ParamSet$format()
paradox::ParamSet$get_domain()
paradox::ParamSet$get_values()
paradox::ParamSet$ids()
paradox::ParamSet$print()
paradox::ParamSet$qunif()
paradox::ParamSet$search_space()
paradox::ParamSet$set_values()
paradox::ParamSet$subset()
paradox::ParamSet$subspaces()
paradox::ParamSet$test()
paradox::ParamSet$test_constraint()
paradox::ParamSet$test_constraint_dt()
paradox::ParamSet$test_dt()
paradox::ParamSet$trafo()
new()
Creates a new instance of this R6 class.
Codomain$new(params)
params
(list())
Named list with which to initialize the codomain.
This argument is analogous to paradox::ParamSet's $initialize() params argument.
clone()
The objects of this class are cloneable with this method.
Codomain$clone(deep = FALSE)
deep
Whether to make a deep clone.
# define objective function
fun = function(xs) {
  c(y = -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
}

# set domain
domain = ps(
  x1 = p_dbl(-10, 10),
  x2 = p_dbl(-5, 5)
)

# set codomain
codomain = ps(
  y = p_dbl(tags = "maximize"),
  time = p_dbl()
)

# create Objective object
objective = ObjectiveRFun$new(
  fun = fun,
  domain = domain,
  codomain = codomain,
  properties = "deterministic"
)
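Continuing the example above, the active bindings of the resulting Codomain (stored in objective$codomain) can be inspected; only y is a target, time is an extra parameter.
objective$codomain$is_target                    # named logical: y TRUE, time FALSE
objective$codomain$target_ids                   # "y"
objective$codomain$maximization_to_minimization # -1 for the maximized target y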
A CallbackAsync accesses and modifies data during the optimization via the ContextAsync.
See the section on active bindings for a list of modifiable objects.
See callback_async() for a list of stages which access ContextAsync.
Changes to $instance and $optimizer in the stages executed on the workers are not reflected in the main process.
mlr3misc::Context
-> ContextAsync
instance
(OptimInstanceAsync).
optimizer
(Optimizer).
xs
(list())
The point to be evaluated in instance$.eval_point().
xs_trafoed
(list())
The transformed point to be evaluated in instance$.eval_point().
extra
(list())
Additional information of the point to be evaluated in instance$.eval_point().
ys
(list())
The result of the evaluation in instance$.eval_point().
result_xdt
(data.table::data.table)
The xdt passed to instance$assign_result().
result_y
(numeric(1))
The y passed to instance$assign_result().
Only available for single criterion optimization.
result_ydt
(data.table::data.table)
The ydt passed to instance$assign_result().
Only available for multi criterion optimization.
result_extra
(data.table::data.table)
Additional information about the result passed to instance$assign_result().
result
(data.table::data.table)
The result of the optimization in instance$assign_result().
new()
Creates a new instance of this R6 class.
ContextAsync$new(inst, optimizer)
inst
optimizer
(Optimizer).
clone()
The objects of this class are cloneable with this method.
ContextAsync$clone(deep = FALSE)
deep
Whether to make a deep clone.
A CallbackBatch accesses and modifies data during the optimization via the ContextBatch.
See the section on active bindings for a list of modifiable objects.
See callback_batch() for a list of stages which access ContextBatch.
mlr3misc::Context
-> ContextBatch
instance
(OptimInstanceBatch).
optimizer
(Optimizer).
xdt
(data.table::data.table)
The points of the latest batch in instance$eval_batch().
Contains the values in the search space, i.e. transformations are not yet applied.
result_xdt
(data.table::data.table)
The xdt passed to instance$assign_result().
result_y
(numeric(1))
The y passed to instance$assign_result().
Only available for single criterion optimization.
result_ydt
(data.table::data.table)
The ydt passed to instance$assign_result().
Only available for multi criterion optimization.
result_extra
(data.table::data.table)
Additional information about the result passed to instance$assign_result().
result
(data.table::data.table)
The result of the optimization in instance$assign_result().
new()
Creates a new instance of this R6 class.
ContextBatch$new(inst, optimizer)
inst
optimizer
(Optimizer).
clone()
The objects of this class are cloneable with this method.
ContextBatch$clone(deep = FALSE)
deep
Whether to make a deep clone.
Returns which points from a set are dominated by another point in the set.
is_dominated(ymat)
ymat
(matrix())
Numeric matrix of points.
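A small illustration; following bbotk's internal use of this helper, it is assumed here that ymat holds one point per column and that all objectives are minimized.
# four points in two minimized objectives, one point per column (assumed layout)
ymat = cbind(c(1, 4), c(2, 2), c(3, 1), c(3, 3))
is_dominated(ymat) # FALSE FALSE FALSE TRUE: only c(3, 3) is dominated (by c(2, 2))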
A simple mlr3misc::Dictionary storing objects of class Optimizer.
Each optimizer has an associated help page, see mlr_optimizer_[id].
This dictionary can get populated with additional optimizers by add-on packages.
For a more convenient way to retrieve and construct optimizers, see opt()/opts().
R6::R6Class object inheriting from mlr3misc::Dictionary.
See mlr3misc::Dictionary.
as.data.table(dict, ..., objects = FALSE)
mlr3misc::Dictionary -> data.table::data.table()
Returns a data.table::data.table() with fields "key", "label", "param_classes", "properties" and "packages" as columns.
If objects is set to TRUE, the constructed objects are returned in the list column named object.
Sugar functions: opt(), opts()
as.data.table(mlr_optimizers)
mlr_optimizers$get("random_search")
opt("random_search")
OptimizerAsyncDesignPoints class that implements optimization w.r.t. fixed design points.
We simply search over a set of points fully specified by the user.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("async_design_points")
opt("async_design_points")
design
data.table::data.table
Design points to try in search, one per row.
bbotk::Optimizer
-> bbotk::OptimizerAsync
-> OptimizerAsyncDesignPoints
new()
Creates a new instance of this R6 class.
OptimizerAsyncDesignPoints$new()
optimize()
Starts the asynchronous optimization.
OptimizerAsyncDesignPoints$optimize(inst)
inst
clone()
The objects of this class are cloneable with this method.
OptimizerAsyncDesignPoints$clone(deep = FALSE)
deep
Whether to make a deep clone.
OptimizerAsyncGridSearch class that implements a grid search.
The grid is constructed as a Cartesian product over discretized values per parameter, see paradox::generate_design_grid().
The points of the grid are evaluated in a random order.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("async_grid_search")
opt("async_grid_search")
batch_size
integer(1)
Maximum number of points to try in a batch.
bbotk::Optimizer
-> bbotk::OptimizerAsync
-> OptimizerAsyncGridSearch
new()
Creates a new instance of this R6 class.
OptimizerAsyncGridSearch$new()
optimize()
Starts the asynchronous optimization.
OptimizerAsyncGridSearch$optimize(inst)
inst
clone()
The objects of this class are cloneable with this method.
OptimizerAsyncGridSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
OptimizerAsyncRandomSearch class that implements a simple Random Search.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("async_random_search")
opt("async_random_search")
bbotk::Optimizer
-> bbotk::OptimizerAsync
-> OptimizerAsyncRandomSearch
new()
Creates a new instance of this R6 class.
OptimizerAsyncRandomSearch$new()
clone()
The objects of this class are cloneable with this method.
OptimizerAsyncRandomSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
Bergstra J, Bengio Y (2012). “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research, 13(10), 281–305. https://jmlr.csail.mit.edu/papers/v13/bergstra12a.html.
OptimizerBatchChain allows running multiple OptimizerBatch optimizers sequentially.
For each OptimizerBatch an (optional) additional Terminator can be specified during construction. While the original Terminator of the OptimInstanceBatch guards the optimization process as a whole, the additional Terminators guard each individual OptimizerBatch.
The optimization process works as follows: The first OptimizerBatch is run on the OptimInstanceBatch relying on a TerminatorCombo of the original Terminator of the OptimInstanceBatch and the (optional) additional Terminator as passed during construction. Once this TerminatorCombo indicates termination (usually via the additional Terminator), the second OptimizerBatch is run. This continues for all optimizers unless the original Terminator of the OptimInstanceBatch indicates termination.
OptimizerBatchChain can also be used for random restarts of the same Optimizer (if applicable) by setting the Terminator of the OptimInstanceBatch to TerminatorNone and setting identical additional Terminators during construction.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("chain")
opt("chain")
Parameters are inherited from the individual OptimizerBatch and collected as a paradox::ParamSetCollection (with set_ids potentially postfixed via _1, _2, ..., if the same OptimizerBatch are used multiple times).
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
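A minimal sketch of this pattern; instance and optimizer stand for any OptimInstanceBatch and Optimizer, e.g. the ones constructed in the example below.
library(progressr)
handlers("progress")  # use the progress package as backend
with_progress(
  optimizer$optimize(instance)
)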
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchChain
new()
Creates a new instance of this R6 class.
OptimizerBatchChain$new( optimizers, terminators = rep(list(NULL), length(optimizers)) )
optimizers
(list of Optimizers).
terminators
(list of Terminators | NULL).
clone()
The objects of this class are cloneable with this method.
OptimizerBatchChain$clone(deep = FALSE)
deep
Whether to make a deep clone.
library(paradox)

domain = ps(x = p_dbl(lower = -1, upper = 1))
search_space = ps(x = p_dbl(lower = -1, upper = 1))
codomain = ps(y = p_dbl(tags = "minimize"))
objective_function = function(xs) {
  list(y = as.numeric(xs)^2)
}
objective = ObjectiveRFun$new(
  fun = objective_function,
  domain = domain,
  codomain = codomain
)
terminator = trm("evals", n_evals = 10)

# run optimizers sequentially
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = terminator
)
optimizer = opt("chain",
  optimizers = list(opt("random_search"), opt("grid_search")),
  terminators = list(trm("evals", n_evals = 5), trm("evals", n_evals = 5))
)
optimizer$optimize(instance)

# random restarts
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("none")
)
optimizer = opt("chain",
  optimizers = list(opt("gensa"), opt("gensa")),
  terminators = list(trm("evals", n_evals = 10), trm("evals", n_evals = 10))
)
optimizer$optimize(instance)
OptimizerBatchCmaes class that implements CMA-ES. Calls adagio::pureCMAES() from package adagio. The algorithm is typically applied to search space dimensions between three and fifty. Lower search space dimensions might crash.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("cmaes")
opt("cmaes")
sigma
numeric(1)
start_values
character(1)
Create random start values or based on center of search space? In the latter case, it is the center of the parameters before a trafo is applied.
For the meaning of the control parameters, see adagio::pureCMAES(). Note that we have removed all control parameters which refer to the termination of the algorithm and where our terminators allow obtaining the same behavior.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchCmaes
new()
Creates a new instance of this R6 class.
OptimizerBatchCmaes$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchCmaes$clone(deep = FALSE)
deep
Whether to make a deep clone.
if (requireNamespace("adagio")) {
  search_space = domain = ps(
    x1 = p_dbl(-10, 10),
    x2 = p_dbl(-5, 5)
  )
  codomain = ps(y = p_dbl(tags = "maximize"))
  objective_function = function(xs) {
    c(y = -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
  }
  objective = ObjectiveRFun$new(
    fun = objective_function,
    domain = domain,
    codomain = codomain)
  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    search_space = search_space,
    terminator = trm("evals", n_evals = 10))
  optimizer = opt("cmaes")

  # modifies the instance by reference
  optimizer$optimize(instance)

  # returns best scoring evaluation
  instance$result

  # allows access of data.table of full path of all evaluations
  as.data.table(instance$archive$data)
}
OptimizerBatchDesignPoints class that implements optimization w.r.t. fixed design points. We simply search over a set of points fully specified by the user. The points in the design are evaluated in order as given.
In order to support general termination criteria and parallelization, we evaluate points in a batch-fashion of size batch_size. Larger batches mean we can parallelize more, smaller batches imply a more fine-grained checking of termination criteria.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("design_points")
opt("design_points")
batch_size
integer(1)
Maximum number of configurations to try in a batch.
design
data.table::data.table
Design points to try in search, one per row.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchDesignPoints
new()
Creates a new instance of this R6 class.
OptimizerBatchDesignPoints$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchDesignPoints$clone(deep = FALSE)
deep
Whether to make a deep clone.
library(data.table)

search_space = domain = ps(x = p_dbl(lower = -1, upper = 1))
codomain = ps(y = p_dbl(tags = "minimize"))
objective_function = function(xs) {
  list(y = as.numeric(xs)^2)
}
objective = ObjectiveRFun$new(
  fun = objective_function,
  domain = domain,
  codomain = codomain)
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 10))

design = data.table(x = c(0, 1))
optimizer = opt("design_points", design = design)

# Modifies the instance by reference
optimizer$optimize(instance)

# Returns best scoring evaluation
instance$result

# Allows access of data.table of full path of all evaluations
as.data.table(instance$archive)
OptimizerBatchFocusSearch class that implements a Focus Search.
Focus Search starts with evaluating n_points drawn uniformly at random.
For 1 to maxit batches, n_points are then drawn uniformly at random and if the best value of a batch outperforms the previous best value over all batches evaluated so far, the search space is shrunk around this new best point prior to the next batch being sampled and evaluated.
For details on the shrinking, see shrink_ps.
Depending on the Terminator this procedure simply restarts after maxit is reached.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("focus_search")
opt("focus_search")
n_points
integer(1)
Number of points to evaluate in each random search batch.
maxit
integer(1)
Number of random search batches to run.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchFocusSearch
new()
Creates a new instance of this R6 class.
OptimizerBatchFocusSearch$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchFocusSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
search_space = domain = ps(x = p_dbl(lower = -1, upper = 1))
codomain = ps(y = p_dbl(tags = "minimize"))
objective_function = function(xs) {
  list(y = as.numeric(xs)^2)
}
objective = ObjectiveRFun$new(
  fun = objective_function,
  domain = domain,
  codomain = codomain)
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 10))
optimizer = opt("focus_search")

# modifies the instance by reference
optimizer$optimize(instance)

# returns best scoring evaluation
instance$result

# allows access of data.table of full path of all evaluations
as.data.table(instance$archive$data)
OptimizerBatchGenSA class that implements generalized simulated annealing. Calls GenSA::GenSA() from package GenSA.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("gensa")
opt("gensa")
smooth
logical(1)
temperature
numeric(1)
acceptance.param
numeric(1)
verbose
logical(1)
trace.mat
logical(1)
For the meaning of the control parameters, see GenSA::GenSA(). Note that we have removed all control parameters which refer to the termination of the algorithm and where our terminators allow obtaining the same behavior.
In contrast to the GenSA::GenSA() defaults, we set trace.mat = FALSE.
Note that GenSA::GenSA() uses smooth = TRUE as a default.
In the case of using this optimizer for Hyperparameter Optimization you may want to set smooth = FALSE.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchGenSA
new()
Creates a new instance of this R6 class.
OptimizerBatchGenSA$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchGenSA$clone(deep = FALSE)
deep
Whether to make a deep clone.
Tsallis C, Stariolo DA (1996). “Generalized simulated annealing.” Physica A: Statistical Mechanics and its Applications, 233(1-2), 395–406. doi:10.1016/s0378-4371(96)00271-3.
Xiang Y, Gubian S, Suomela B, Hoeng J (2013). “Generalized Simulated Annealing for Global Optimization: The GenSA Package.” The R Journal, 5(1), 13. doi:10.32614/rj-2013-002.
if (requireNamespace("GenSA")) {
  search_space = domain = ps(x = p_dbl(lower = -1, upper = 1))
  codomain = ps(y = p_dbl(tags = "minimize"))
  objective_function = function(xs) {
    list(y = as.numeric(xs)^2)
  }
  objective = ObjectiveRFun$new(
    fun = objective_function,
    domain = domain,
    codomain = codomain)
  instance = OptimInstanceBatchSingleCrit$new(
    objective = objective,
    search_space = search_space,
    terminator = trm("evals", n_evals = 10))
  optimizer = opt("gensa")

  # Modifies the instance by reference
  optimizer$optimize(instance)

  # Returns best scoring evaluation
  instance$result

  # Allows access of data.table of full path of all evaluations
  as.data.table(instance$archive$data)
}
OptimizerBatchGridSearch class that implements grid search. The grid is constructed as a Cartesian product over discretized values per parameter, see paradox::generate_design_grid(). The points of the grid are evaluated in a random order.
In order to support general termination criteria and parallelization, we evaluate points in a batch-fashion of size batch_size. Larger batches mean we can parallelize more, smaller batches imply a more fine-grained checking of termination criteria.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("grid_search")
opt("grid_search")
resolution
integer(1)
Resolution of the grid, see paradox::generate_design_grid().
param_resolutions
named integer()
Resolution per parameter, named by parameter ID, see paradox::generate_design_grid().
batch_size
integer(1)
Maximum number of points to try in a batch.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchGridSearch
new()
Creates a new instance of this R6 class.
OptimizerBatchGridSearch$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchGridSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
search_space = domain = ps(x = p_dbl(lower = -1, upper = 1))
codomain = ps(y = p_dbl(tags = "minimize"))
objective_function = function(xs) {
  list(y = as.numeric(xs)^2)
}
objective = ObjectiveRFun$new(
  fun = objective_function,
  domain = domain,
  codomain = codomain)
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 10))
optimizer = opt("grid_search")

# modifies the instance by reference
optimizer$optimize(instance)

# returns best scoring evaluation
instance$result

# allows access of data.table of full path of all evaluations
as.data.table(instance$archive$data)
OptimizerBatchIrace class that implements iterated racing. Calls irace::irace() from package irace.
instances
list()
A list of instances on which the configurations are executed.
targetRunnerParallel
function()
A function that executes the objective function with a specific parameter
configuration and instance. A default function is provided, see section
"Target Runner and Instances".
For the meaning of all other parameters, see irace::defaultScenario(). Note that we have removed all control parameters which refer to the termination of the algorithm. Use TerminatorEvals instead. Other terminators do not work with OptimizerBatchIrace.
In contrast to irace::defaultScenario(), we set digits = 15.
This represents double parameters with a higher precision and avoids rounding errors.
The irace package uses a targetRunner script or R function to evaluate a configuration on a particular instance. Usually it is not necessary to specify a targetRunner function when using OptimizerBatchIrace. A default function is used that forwards several configurations and instances to the user-defined objective function. As usual, the user-defined function has an xs, xss or xdt parameter depending on the used Objective class. For irace, the function needs an additional instances parameter.
fun = function(xs, instances) {
  # function to evaluate configuration in `xs` on instance `instances`
}
The Archive holds the following additional columns:
"race"
(integer(1))
Race iteration.
"step"
(integer(1))
Step number of race.
"instance"
(integer(1))
Identifies instances across races and steps.
"configuration"
(integer(1))
Identifies configurations across races and steps.
The optimization result (instance$result) is the best performing elite of the final race. The reported performance is the average performance estimated on all used instances.
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("irace")
opt("irace")
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchIrace
new()
Creates a new instance of this R6 class.
OptimizerBatchIrace$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchIrace$clone(deep = FALSE)
deep
Whether to make a deep clone.
Lopez-Ibanez M, Dubois-Lacoste J, Caceres LP, Birattari M, Stuetzle T (2016). “The irace package: Iterated racing for automatic algorithm configuration.” Operations Research Perspectives, 3, 43–58. doi:10.1016/j.orp.2016.09.002.
library(data.table)

search_space = domain = ps(
  x1 = p_dbl(-5, 10),
  x2 = p_dbl(0, 15)
)
codomain = ps(y = p_dbl(tags = "minimize"))

# branin function with noise
# the noise generates different instances of the branin function
# the noise values are passed via the `instances` parameter
fun = function(xdt, instances) {
  ys = branin(xdt[["x1"]], xdt[["x2"]], noise = as.numeric(instances))
  data.table(y = ys)
}

# define objective with instances as a constant
objective = ObjectiveRFunDt$new(
  fun = fun,
  domain = domain,
  codomain = codomain,
  constants = ps(instances = p_uty()))

instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 1000))

# create instances of branin function
instances = rnorm(10, mean = 0, sd = 0.1)

# load optimizer irace and set branin instances
optimizer = opt("irace", instances = instances)

# modifies the instance by reference
optimizer$optimize(instance)

# best scoring configuration
instance$result

# all evaluations
as.data.table(instance$archive)
OptimizerBatchLocalSearch class that implements a simple Local Search.
Local Search starts by determining the n_initial_points initial best points present in the Archive of the OptimInstance.
If fewer points than n_initial_points are present, additional initial_random_sample_size points sampled uniformly at random are evaluated and the best n_initial_points initial points are determined.
In each iteration, for each of the n_initial_points initial best points, neighbors_per_point neighbors are generated by local mutation.
Local mutation generates a neighbor by sampling a single parameter that is to be mutated and then proceeds as follows: Double parameters (paradox::p_dbl()) are mutated via Gaussian mutation (with a prior standardization to [0, 1] and retransformation after mutation).
Integer parameters (paradox::p_int()) undergo the same mutation but are rounded to the closest integer after mutation.
Categorical parameters (paradox::p_fct() and paradox::p_lgl()) are mutated via uniform mutation.
Note that parameters that are conditioned on (i.e., they are parents of a paradox::Condition, see the dependencies of the search space) are not mutated.
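The following is an illustrative sketch of the local mutation step described above for a single double parameter; it is not bbotk's implementation, and the function name and the clamping to [0, 1] are assumptions made for the illustration.
# Gaussian mutation of one double parameter on the standardized [0, 1] scale
mutate_dbl = function(value, lower, upper, mutation_sd = 0.1) {
  z = (value - lower) / (upper - lower)  # standardize to [0, 1]
  z = z + rnorm(1, mean = 0, sd = mutation_sd)
  z = min(max(z, 0), 1)                  # keep the mutated value inside the bounds (assumption)
  lower + z * (upper - lower)            # retransform to the original scale
}
mutate_dbl(0.3, lower = -1, upper = 1)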
This Optimizer can be instantiated via the dictionary mlr_optimizers or with the associated sugar function opt():
mlr_optimizers$get("local_search")
opt("local_search")
n_initial_points
integer(1)
Size of the set of initial best points which are used as starting points for the Local Search.
Default is 10.
initial_random_sample_size
integer(1)
Number of points that are sampled uniformly at random before the best n_initial_points initial points are determined, if fewer points than n_initial_points are present in the Archive of the OptimInstance.
Default is 100.
neighbors_per_point
integer(1)
Number of neighboring points to generate for each of the n_initial_points best starting points in each iteration.
Default is 100.
mutation_sd
numeric(1)
Standard deviation used to create neighbors during mutation of numeric parameters on the standardized [0, 1] scale.
Default is 0.1.
The Archive holds the following additional column that is specific to the algorithm:
.point_id
(integer(1))
The id (1, ..., n_initial_points) indicating from which of the n_initial_points best points the evaluated point was generated.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the function in progressr::with_progress() to enable them. We recommend using the package progress as backend; enable with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchLocalSearch
new()
Creates a new instance of this R6 class.
OptimizerBatchLocalSearch$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchLocalSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
search_space = domain = ps(x = p_dbl(lower = -1, upper = 1))
codomain = ps(y = p_dbl(tags = "minimize"))
objective_function = function(xs) {
  list(y = as.numeric(xs)^2)
}
objective = ObjectiveRFun$new(
  fun = objective_function,
  domain = domain,
  codomain = codomain)
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 100))

# evaluate an initial sample of 10 points uniformly at random
# choose the best 3 points as the initial points
# for each of these points generate 10 neighbors
# repeat this process
optimizer = opt("local_search",
  n_initial_points = 3,
  initial_random_sample_size = 10,
  neighbors_per_point = 10)

# modifies the instance by reference
optimizer$optimize(instance)

# returns best scoring evaluation
instance$result

# allows access of data.table of full path of all evaluations
as.data.table(instance$archive$data)
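As a brief follow-up (an illustrative snippet, not part of the package examples; it assumes the instance object from the example above), the algorithm-specific archive column can be inspected directly. Points from the initial random sample may carry a missing .point_id:

# count how many evaluated points stem from each of the n_initial_points starting points
table(instance$archive$data$.point_id, useNA = "ifany")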
OptimizerBatchNLoptr
class that implements non-linear optimization. Calls
nloptr::nloptr()
from package nloptr.
algorithm
character(1)
eval_g_ineq
function()
xtol_rel
numeric(1)
xtol_abs
numeric(1)
ftol_rel
numeric(1)
ftol_abs
numeric(1)
start_values
character(1)
Create random start values or start values based on the center of the search space? In the
latter case, it is the center of the parameters before a trafo is applied.
For the meaning of the control parameters, see nloptr::nloptr()
and
nloptr::nloptr.print.options()
.
The termination conditions stopval
, maxtime
and maxeval
of
nloptr::nloptr()
are deactivated and replaced by the Terminator
subclasses. The x and function value tolerance termination conditions
(xtol_rel = 10^-4
, xtol_abs = rep(0.0, length(x0))
, ftol_rel = 0.0
and
ftol_abs = 0.0
) are still available and implemented with their package
defaults. To deactivate these conditions, set them to -1
.
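As an illustrative sketch (not taken from the package documentation), the tolerance-based stopping rules can be switched off so that only the Terminator of the OptimInstance controls termination:

# deactivate nloptr's tolerance-based stopping rules by setting them to -1;
# termination is then controlled solely by the Terminator of the instance
optimizer = opt("nloptr",
  algorithm = "NLOPT_LN_BOBYQA",
  xtol_rel = -1,
  ftol_rel = -1
)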
$optimize()
supports progress bars via the package progressr
combined with a Terminator. Simply wrap the function in
progressr::with_progress()
to enable them. We recommend using the progress package as the backend; enable it with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchNLoptr
new()
Creates a new instance of this R6 class.
OptimizerBatchNLoptr$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchNLoptr$clone(deep = FALSE)
deep
Whether to make a deep clone.
Johnson, S G (2020). “The NLopt nonlinear-optimization package.” https://github.com/stevengj/nlopt.
if (requireNamespace("nloptr")) { search_space = domain = ps(x = p_dbl(lower = -1, upper = 1)) codomain = ps(y = p_dbl(tags = "minimize")) objective_function = function(xs) { list(y = as.numeric(xs)^2) } objective = ObjectiveRFun$new( fun = objective_function, domain = domain, codomain = codomain) # We use the internal termination criterion xtol_rel terminator = trm("none") instance = OptimInstanceBatchSingleCrit$new( objective = objective, search_space = search_space, terminator = terminator) optimizer = opt("nloptr", algorithm = "NLOPT_LN_BOBYQA") # Modifies the instance by reference optimizer$optimize(instance) # Returns best scoring evaluation instance$result # Allows access of data.table of full path of all evaluations as.data.table(instance$archive) }
OptimizerBatchRandomSearch
class that implements a simple Random Search.
In order to support general termination criteria and parallelization, points are
evaluated in batches of size batch_size
. Larger batches allow more parallelization; smaller batches allow a more fine-grained checking
of termination criteria.
This Optimizer can be instantiated via the dictionary
mlr_optimizers or with the associated sugar function opt()
:
mlr_optimizers$get("random_search") opt("random_search")
batch_size
integer(1)
Maximum number of points to try in a batch.
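For illustration (a minimal sketch, not part of the original examples), the batch size is set as a control parameter when constructing the optimizer:

# larger batches allow more parallel evaluations per termination check,
# smaller batches check the Terminator more often
optimizer = opt("random_search", batch_size = 20)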
$optimize()
supports progress bars via the package progressr
combined with a Terminator. Simply wrap the function in
progressr::with_progress()
to enable them. We recommend using the progress package as the backend; enable it with progressr::handlers("progress").
bbotk::Optimizer
-> bbotk::OptimizerBatch
-> OptimizerBatchRandomSearch
new()
Creates a new instance of this R6 class.
OptimizerBatchRandomSearch$new()
clone()
The objects of this class are cloneable with this method.
OptimizerBatchRandomSearch$clone(deep = FALSE)
deep
Whether to make a deep clone.
Bergstra J, Bengio Y (2012). “Random Search for Hyper-Parameter Optimization.” Journal of Machine Learning Research, 13(10), 281–305. https://jmlr.csail.mit.edu/papers/v13/bergstra12a.html.
search_space = domain = ps(x = p_dbl(lower = -1, upper = 1))

codomain = ps(y = p_dbl(tags = "minimize"))

objective_function = function(xs) {
  list(y = as.numeric(xs)^2)
}

objective = ObjectiveRFun$new(
  fun = objective_function,
  domain = domain,
  codomain = codomain)

instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  search_space = search_space,
  terminator = trm("evals", n_evals = 10))

optimizer = opt("random_search")

# modifies the instance by reference
optimizer$optimize(instance)

# returns best scoring evaluation
instance$result

# allows access of data.table of full path of all evaluations
as.data.table(instance$archive$data)
A simple mlr3misc::Dictionary storing objects of class Terminator.
Each terminator has an associated help page, see mlr_terminators_[id]
.
This dictionary can get populated with additional terminators by add-on packages.
For a more convenient way to retrieve and construct terminator, see trm()
/trms()
.
R6::R6Class object inheriting from mlr3misc::Dictionary.
See mlr3misc::Dictionary.
as.data.table(dict, ..., objects = FALSE)
mlr3misc::Dictionary -> data.table::data.table()
Returns a data.table::data.table()
with fields "key", "label", "properties" and "unit" as columns.
If objects
is set to TRUE
, the constructed objects are returned in the list column named object
.
Sugar functions: trm()
, trms()
Other Terminator:
Terminator
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
as.data.table(mlr_terminators)
mlr_terminators$get("evals")
trm("evals", n_evals = 10)
Class to terminate the optimization after a fixed time point has been reached (as reported by Sys.time()
).
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("clock_time") trm("clock_time")
stop_time
POSIXct(1)
Terminator stops after this point in time.
bbotk::Terminator
-> TerminatorClockTime
new()
Creates a new instance of this R6 class.
TerminatorClockTime$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorClockTime$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorClockTime$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
stop_time = as.POSIXct("2030-01-01 00:00:00")
trm("clock_time", stop_time = stop_time)
This class takes multiple Terminators and terminates as soon as one or all of the included terminators are positive.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("combo") trm("combo")
any
logical(1)
Terminate iff any included terminator is positive? (not all).
Default is TRUE
.
bbotk::Terminator
-> TerminatorCombo
terminators
(list()
)
List of objects of class Terminator.
new()
Creates a new instance of this R6 class.
TerminatorCombo$new(terminators = list(TerminatorNone$new()))
terminators
(list()
)
List of objects of class Terminator.
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorCombo$is_terminated(archive)
archive
(Archive).
logical(1)
.
print()
Printer.
TerminatorCombo$print(...)
...
(ignored).
remaining_time()
Returns the remaining runtime in seconds. If any = TRUE
, the remaining
runtime is determined by the time-based terminator with the shortest time
remaining. If non-time-based terminators are used and any = FALSE
,
the remaining runtime is always Inf
.
TerminatorCombo$remaining_time(archive)
archive
(Archive).
integer(1)
.
status_long()
Returns max_steps
and current_steps
for each terminator.
TerminatorCombo$status_long(archive)
archive
(Archive).
clone()
The objects of this class are cloneable with this method.
TerminatorCombo$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
trm("combo", list(trm("clock_time", stop_time = Sys.time() + 60), trm("evals", n_evals = 10)), any = FALSE )
trm("combo", list(trm("clock_time", stop_time = Sys.time() + 60), trm("evals", n_evals = 10)), any = FALSE )
Class to terminate the optimization depending on the number of evaluations.
An evaluation is defined by one resampling of a parameter value.
The total number of evaluations is defined as n_evals + k * D,
where D is the dimension of the search space.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("evals") trm("evals")
n_evals
integer(1)
See formula above. Default is 100.
k
integer(1)
See formula above. Default is 0.
bbotk::Terminator
-> TerminatorEvals
new()
Creates a new instance of this R6 class.
TerminatorEvals$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorEvals$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorEvals$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
TerminatorEvals$new()

# 5 evaluations in total
trm("evals", n_evals = 5)

# 3 * [dimension of search space] evaluations in total
trm("evals", n_evals = 0, k = 3)

# (3 * [dimension of search space] + 1) evaluations in total
trm("evals", n_evals = 1, k = 3)
Mainly useful for optimization algorithms where the stopping is inherently controlled by the algorithm itself (e.g. OptimizerBatchGridSearch).
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("none") trm("none")
bbotk::Terminator
-> TerminatorNone
new()
Creates a new instance of this R6 class.
TerminatorNone$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorNone$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorNone$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
Class to terminate the optimization after a performance level has been hit.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("perf_reached") trm("perf_reached")
level
numeric(1)
Performance level that needs to be reached.
Default is 0.
Terminates if the performance exceeds this value (if the respective measure is maximized) or falls below it (if the respective measure is minimized).
bbotk::Terminator
-> TerminatorPerfReached
new()
Creates a new instance of this R6 class.
TerminatorPerfReached$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorPerfReached$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorPerfReached$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
TerminatorPerfReached$new()
trm("perf_reached")
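The following sketch (not taken from the package documentation; it assumes a minimized single-criterion objective) shows the terminator stopping a random search once a target performance level is reached:

library(bbotk)

objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x^2),
  domain = ps(x = p_dbl(-1, 1)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

# stop once an evaluation falls below the performance level of 0.01 (the objective is minimized)
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  terminator = trm("perf_reached", level = 0.01)
)

opt("random_search")$optimize(instance)
instance$result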
Class to terminate the optimization after the optimization process has run for a given number of seconds.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("run_time") trm("run_time")
secs
numeric(1)
Maximum allowed time, in seconds, default is 100.
bbotk::Terminator
-> TerminatorRunTime
new()
Creates a new instance of this R6 class.
TerminatorRunTime$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorRunTime$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorRunTime$clone(deep = FALSE)
deep
Whether to make a deep clone.
This terminator only works if archive$start_time
is set. This is usually
done by the Optimizer.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
trm("run_time", secs = 1800)
trm("run_time", secs = 1800)
Class to terminate the optimization after the performance stagnates, i.e.
does not improve more than threshold
over the last iters
iterations.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("stagnation") trm("stagnation")
iters
integer(1)
Number of iterations to evaluate the performance improvement on, default
is 10.
threshold
numeric(1)
If the improvement is less than threshold
, optimization is stopped,
default is 0
.
bbotk::Terminator
-> TerminatorStagnation
new()
Creates a new instance of this R6 class.
TerminatorStagnation$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorStagnation$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorStagnation$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation_batch
,
mlr_terminators_stagnation_hypervolume
TerminatorStagnation$new()
trm("stagnation", iters = 5, threshold = 1e-5)
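As a sketch of typical usage (not taken from the package documentation; assumptions mirror the other examples on this page), the terminator stops a random search once the best observed value no longer improves by more than the threshold:

library(bbotk)

objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x^2),
  domain = ps(x = p_dbl(-1, 1)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

# stop when the best y does not improve by more than 1e-3 over the last 20 evaluations
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  terminator = trm("stagnation", iters = 20, threshold = 1e-3)
)

opt("random_search")$optimize(instance)
nrow(instance$archive$data)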
Class to terminate the optimization after the performance stagnates, i.e.
does not improve more than threshold
over the last n
batches.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("stagnation_batch") trm("stagnation_batch")
n
integer(1)
Number of batches to evaluate the performance improvement on, default
is 1.
threshold
numeric(1)
If the improvement is less than threshold
, optimization is stopped,
default is 0
.
bbotk::Terminator
-> TerminatorStagnationBatch
new()
Creates a new instance of this R6 class.
TerminatorStagnationBatch$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorStagnationBatch$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorStagnationBatch$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_hypervolume
TerminatorStagnationBatch$new()
trm("stagnation_batch", n = 1, threshold = 1e-5)
Class to terminate the optimization after the hypervolume stagnates, i.e. does not improve more than threshold
over the last iters
iterations.
This Terminator can be instantiated via the
dictionary mlr_terminators or with the associated
sugar function trm()
:
mlr_terminators$get("stagnation_hypervolume") trm("stagnation_hypervolume")
iters
integer(1)
Number of iterations to evaluate the performance improvement on, default is 10.
threshold
numeric(1)
If the improvement is less than threshold
, optimization is stopped, default is 0
.
bbotk::Terminator
-> TerminatorStagnationHypervolume
new()
Creates a new instance of this R6 class.
TerminatorStagnationHypervolume$new()
is_terminated()
Is TRUE
iff the termination criterion is positive, and FALSE
otherwise.
TerminatorStagnationHypervolume$is_terminated(archive)
archive
(Archive).
logical(1)
.
clone()
The objects of this class are cloneable with this method.
TerminatorStagnationHypervolume$clone(deep = FALSE)
deep
Whether to make a deep clone.
Other Terminator:
Terminator
,
mlr_terminators
,
mlr_terminators_clock_time
,
mlr_terminators_combo
,
mlr_terminators_evals
,
mlr_terminators_none
,
mlr_terminators_perf_reached
,
mlr_terminators_run_time
,
mlr_terminators_stagnation
,
mlr_terminators_stagnation_batch
TerminatorStagnationHypervolume$new()
trm("stagnation_hypervolume", iters = 5, threshold = 1e-5)
The Objective
class describes a black-box objective function that maps an arbitrary domain to a numerical codomain.
Objective
objects can have the following properties: "noisy"
, "deterministic"
, "single-crit"
and "multi-crit"
.
callbacks
(list of mlr3misc::Callback)
Callbacks applied during the optimization.
context
(ContextBatch)
Stores the context for the callbacks.
id
(character(1)
).
properties
(character()
).
domain
(paradox::ParamSet)
Specifies domain of function, hence its input parameters, their types
and ranges.
codomain
(paradox::ParamSet)
Specifies codomain of function, hence its feasible values.
constants
(paradox::ParamSet).
Changeable constants or parameters that are not subject to tuning can be
stored and accessed here. Set constant values are passed to $.eval()
and $.eval_many()
as named arguments.
check_values
(logical(1)
)
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
xdim
(integer(1)
)
Dimension of domain.
ydim
(integer(1)
)
Dimension of codomain.
new()
Creates a new instance of this R6 class.
Objective$new( id = "f", properties = character(), domain, codomain = ps(y = p_dbl(tags = "minimize")), constants = ps(), check_values = TRUE, label = NA_character_, man = NA_character_ )
id
(character(1)
).
properties
(character()
).
domain
(paradox::ParamSet)
Specifies domain of function.
The paradox::ParamSet should describe all possible input parameters of the objective function.
This includes their id
, their types and the possible range.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
constants
(paradox::ParamSet)
Changeable constants or parameters that are not subject to tuning can be stored and accessed here.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
format()
Helper for print outputs.
Objective$format(...)
...
(ignored).
print()
Print method.
Objective$print()
character()
.
eval()
Evaluates a single input value on the objective function. If
check_values = TRUE
, the validity of the point as well as the validity
of the result is checked.
Objective$eval(xs)
xs
(list()
)
A list that contains a single x value, e.g. list(x1 = 1, x2 = 2)
.
list()
that contains the result of the evaluation, e.g. list(y = 1)
.
The list can also contain additional named entries that will be stored in the
archive if called through the OptimInstance.
These extra entries are referred to as extras.
eval_many()
Evaluates multiple input values on the objective function. If
check_values = TRUE
, the validity of the points as well as the validity
of the results are checked. bbotk does not take care of
parallelization. If the function should make use of parallel computing,
it has to be implemented by deriving from this class and overriding this
function.
Objective$eval_many(xss)
xss
(list()
)
A list of lists that contains multiple x values, e.g.
list(list(x1 = 1, x2 = 2), list(x1 = 3, x2 = 4))
.
data.table::data.table() that contains one y-column for
single-criteria functions and multiple y-columns for multi-criteria functions,
e.g. data.table(y = 1:2)
or data.table(y1 = 1:2, y2 = 3:4)
.
It may also contain additional columns that will be stored in the archive if
called through the OptimInstance.
These extra columns are referred to as extras.
eval_dt()
Evaluates multiple input values on the objective function
Objective$eval_dt(xdt)
xdt
(data.table::data.table()
)
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4))
.
Column names have to match ids of the search_space
.
However, xdt
can contain additional columns.
data.table::data.table() that contains one y-column for
single-criteria functions and multiple y-columns for multi-criteria
functions, e.g. data.table(y = 1:2)
or data.table(y1 = 1:2, y2 = 3:4)
.
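To make the three evaluation methods concrete, here is a small sketch (not taken from the package documentation): $eval() takes a single named list, $eval_many() a list of such lists, and $eval_dt() a data.table() with one point per row:

library(bbotk)
library(data.table)

objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x1 + xs$x2),
  domain = ps(x1 = p_dbl(-1, 1), x2 = p_dbl(-1, 1)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

# single point as a named list; returns a list with the codomain value
objective$eval(list(x1 = 0.5, x2 = -0.5))

# several points as a list of lists; returns a data.table() with a y column
objective$eval_many(list(list(x1 = 0, x2 = 0), list(x1 = 1, x2 = 1)))

# the same points as a data.table(), one row per point
objective$eval_dt(data.table(x1 = c(0, 1), x2 = c(0, 1)))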
help()
Opens the corresponding help page referenced by field $man
.
Objective$help()
clone()
The objects of this class are cloneable with this method.
Objective$clone(deep = FALSE)
deep
Whether to make a deep clone.
Objective interface where the user can pass a custom R function that expects a list as input. If the return of the function is unnamed, it is named with the ids of the codomain.
bbotk::Objective
-> ObjectiveRFun
fun
(function
)
Objective function.
new()
Creates a new instance of this R6 class.
ObjectiveRFun$new( fun, domain, codomain = NULL, id = "function", properties = character(), constants = ps(), check_values = TRUE )
fun
(function
)
R function that encodes objective and expects a list with the input for a single point
(e.g. list(x1 = 1, x2 = 2)
) and returns the result either as a numeric vector or a
list (e.g. list(y = 3)
).
domain
(paradox::ParamSet)
Specifies domain of function.
The paradox::ParamSet should describe all possible input parameters of the objective function.
This includes their id
, their types and the possible range.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
id
(character(1)
).
properties
(character()
).
constants
(paradox::ParamSet)
Changeable constants or parameters that are not subject to tuning can be stored and accessed here.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
eval()
Evaluates input value(s) on the objective function. Calls the R function supplied by the user.
ObjectiveRFun$eval(xs)
xs
Input values.
clone()
The objects of this class are cloneable with this method.
ObjectiveRFun$clone(deep = FALSE)
deep
Whether to make a deep clone.
# define objective function
fun = function(xs) {
  -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10
}

# set domain
domain = ps(
  x1 = p_dbl(-10, 10),
  x2 = p_dbl(-5, 5)
)

# set codomain
codomain = ps(y = p_dbl(tags = "maximize"))

# create Objective object
obfun = ObjectiveRFun$new(
  fun = fun,
  domain = domain,
  codomain = codomain,
  properties = "deterministic"
)
Objective interface where the user can pass an R function that works on a data.table()
.
bbotk::Objective
-> ObjectiveRFunDt
fun
(function
)
Objective function.
new()
Creates a new instance of this R6 class.
ObjectiveRFunDt$new( fun, domain, codomain = NULL, id = "function", properties = character(), constants = ps(), check_values = TRUE )
fun
(function
)
R function that encodes the objective and expects a data.table()
as input, where each point is represented by one row.
domain
(paradox::ParamSet)
Specifies domain of function.
The paradox::ParamSet should describe all possible input parameters of the objective function.
This includes their id
, their types and the possible range.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
id
(character(1)
).
properties
(character()
).
constants
(paradox::ParamSet)
Changeable constants or parameters that are not subject to tuning can be stored and accessed here.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
eval_many()
Evaluates multiple input values on the objective function; the values are received as a list and converted to a data.table()
. Missing columns in xss
are filled with NA
in xdt
.
ObjectiveRFunDt$eval_many(xss)
xss
(list()
)
A list of lists that contains multiple x values, e.g.
list(list(x1 = 1, x2 = 2), list(x1 = 3, x2 = 4))
.
data.table::data.table()
that contains one y-column for single-criteria functions
and multiple y-columns for multi-criteria functions, e.g.
data.table(y = 1:2)
or data.table(y1 = 1:2, y2 = 3:4)
.
eval_dt()
Evaluates multiple input values on the objective function supplied by the user.
ObjectiveRFunDt$eval_dt(xdt)
xdt
(data.table::data.table()
)
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4))
.
Column names have to match ids of the search_space
.
However, xdt
can contain additional columns.
data.table::data.table() that contains one y-column for single-criteria functions
and multiple y-columns for multi-criteria functions, e.g.
data.table(y = 1:2)
or data.table(y1 = 1:2, y2 = 3:4)
.
clone()
The objects of this class are cloneable with this method.
ObjectiveRFunDt$clone(deep = FALSE)
deep
Whether to make a deep clone.
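Since this page carries no example of its own, here is a minimal sketch (not taken from the package documentation; it mirrors the ObjectiveRFun example) of a vectorized objective that operates on a data.table() of points:

library(bbotk)
library(data.table)

# vectorized objective: xdt holds one point per row, the result holds one y per row
fun = function(xdt) {
  data.table(y = -(xdt$x1 - 2)^2 - (xdt$x2 + 3)^2 + 10)
}

domain = ps(
  x1 = p_dbl(-10, 10),
  x2 = p_dbl(-5, 5)
)

codomain = ps(y = p_dbl(tags = "maximize"))

obfun = ObjectiveRFunDt$new(
  fun = fun,
  domain = domain,
  codomain = codomain
)

obfun$eval_dt(data.table(x1 = c(1, 3), x2 = c(2, 4)))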
Objective interface where the user can pass a custom R function that expects a list of configurations as input. If the return of the function is unnamed, it is named with the ids of the codomain.
bbotk::Objective
-> ObjectiveRFunMany
fun
(function
)
Objective function.
new()
Creates a new instance of this R6 class.
ObjectiveRFunMany$new( fun, domain, codomain = NULL, id = "function", properties = character(), constants = ps(), check_values = TRUE )
fun
(function
)
R function that encodes objective and expects a list of lists that contains multiple x values, e.g. list(list(x1 = 1, x2 = 2), list(x1 = 3, x2 = 4))
.
The function must return a data.table::data.table()
that contains one y-column for single-criteria functions and multiple y-columns for multi-criteria functions, e.g. data.table(y = 1:2)
or data.table(y1 = 1:2, y2 = 3:4)
.
domain
(paradox::ParamSet)
Specifies domain of function.
The paradox::ParamSet should describe all possible input parameters of the objective function.
This includes their id
, their types and the possible range.
codomain
(paradox::ParamSet)
Specifies codomain of function.
Most importantly the tags of each output "Parameter" define whether it should
be minimized or maximized. The default is to minimize each component.
id
(character(1)
).
properties
(character()
).
constants
(paradox::ParamSet)
Changeable constants or parameters that are not subject to tuning can be stored and accessed here.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
eval_many()
Evaluates input value(s) on the objective function. Calls the R function supplied by the user.
ObjectiveRFunMany$eval_many(xss)
xss
(list()
)
A list of lists that contains multiple x values, e.g. list(list(x1 = 1, x2 = 2), list(x1 = 3, x2 = 4))
.
data.table::data.table()
that contains one y-column for single-criteria functions and multiple y-columns for multi-criteria functions, e.g. data.table(y = 1:2)
or data.table(y1 = 1:2, y2 = 3:4)
.
It may also contain additional columns that will be stored in the archive if called through the OptimInstance.
These extra columns are referred to as extras.
clone()
The objects of this class are cloneable with this method.
ObjectiveRFunMany$clone(deep = FALSE)
deep
Whether to make a deep clone.
# define objective function
fun = function(xss) {
  res = lapply(xss, function(xs) -(xs[[1]] - 2)^2 - (xs[[2]] + 3)^2 + 10)
  data.table(y = as.numeric(res))
}

# set domain
domain = ps(
  x1 = p_dbl(-10, 10),
  x2 = p_dbl(-5, 5)
)

# set codomain
codomain = ps(y = p_dbl(tags = "maximize"))

# create Objective object
obfun = ObjectiveRFunMany$new(
  fun = fun,
  domain = domain,
  codomain = codomain,
  properties = "deterministic"
)
Function to construct a OptimInstanceBatchSingleCrit and OptimInstanceBatchMultiCrit.
oi( objective, search_space = NULL, terminator, callbacks = NULL, check_values = TRUE, keep_evals = "all" )
objective | (Objective)
search_space | (paradox::ParamSet)
terminator | Terminator
callbacks | (list of mlr3misc::Callback)
check_values | (logical(1))
keep_evals | (character(1))
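A brief sketch of typical oi() use (not taken from the package documentation; a single-criterion objective yields an OptimInstanceBatchSingleCrit):

library(bbotk)

objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x^2),
  domain = ps(x = p_dbl(-1, 1)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

instance = oi(objective = objective, terminator = trm("evals", n_evals = 10))

opt("random_search")$optimize(instance)
instance$result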
Function to construct an OptimInstanceAsyncSingleCrit and OptimInstanceAsyncMultiCrit.
oi_async( objective, search_space = NULL, terminator, check_values = FALSE, callbacks = NULL, rush = NULL )
objective | (Objective)
search_space | (paradox::ParamSet)
terminator | Terminator
check_values | (logical(1))
callbacks | (list of mlr3misc::Callback)
rush | (Rush)
This function complements mlr_optimizers with functions in the spirit
of mlr_sugar
from mlr3.
opt(.key, ...)
opts(.keys, ...)
.key | (character(1))
... | (named list())
.keys | (character())
opt("random_search", batch_size = 10)
opt("random_search", batch_size = 10)
The OptimInstance
specifies an optimization problem for an Optimizer.
OptimInstance
is an abstract base class that implements the base functionality each instance must provide.
The Optimizer writes the final result to the .result
field by using the $assign_result()
method.
.result
stores a data.table::data.table consisting of x values in the search space, (transformed) x values in the domain space and y values in the codomain space of the Objective.
The user can access the results with active bindings (see below).
objective
(Objective)
Objective function of the instance.
search_space
(paradox::ParamSet)
Specification of the search space for the Optimizer.
terminator
Terminator
Termination criterion of the optimization.
archive
(Archive)
Contains all performed function calls of the Objective.
progressor
(progressor()
)
Stores progressor
function.
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
result
(data.table::data.table)
Get result
result_x_search_space
(data.table::data.table)
x part of the result in the search space.
is_terminated
(logical(1)
).
new()
Creates a new instance of this R6 class.
OptimInstance$new( objective, search_space = NULL, terminator, check_values = TRUE, callbacks = NULL, archive = NULL, label = NA_character_, man = NA_character_ )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
format()
Helper for print outputs.
OptimInstance$format(...)
...
(ignored).
print()
Printer.
OptimInstance$print(...)
...
(ignored).
assign_result()
The Optimizer object writes the best found point and estimated performance value here. For internal use.
OptimInstance$assign_result(xdt, y, ...)
xdt
(data.table::data.table()
)
x values as data.table::data.table()
with one row. Contains the value in the
search space of the OptimInstance object. Can contain additional
columns for extra information.
y
(numeric(1)
)
Optimal outcome.
...
(any
)
ignored.
clear()
Resets the terminator and clears all evaluation results from the archive and the result field.
OptimInstance$clear()
clone()
The objects of this class are cloneable with this method.
OptimInstance$clone(deep = FALSE)
deep
Whether to make a deep clone.
The OptimInstanceAsync
specifies an optimization problem for an OptimizerAsync.
The function oi_async()
creates an OptimInstanceAsyncSingleCrit or OptimInstanceAsyncMultiCrit.
OptimInstanceAsync
is an abstract base class that implements the base functionality each instance must provide.
bbotk::OptimInstance
-> OptimInstanceAsync
rush
(Rush
)
Rush controller for parallel optimization.
new()
Creates a new instance of this R6 class.
OptimInstanceAsync$new( objective, search_space = NULL, terminator, check_values = FALSE, callbacks = NULL, archive = NULL, rush = NULL, label = NA_character_, man = NA_character_ )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
rush
(Rush
)
If a rush instance is supplied, the optimization runs without batches.
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
print()
Printer.
OptimInstanceAsync$print(...)
...
(ignored).
clear()
Resets the terminator and clears all evaluation results from the archive and the result field.
OptimInstanceAsync$clear()
reconnect()
Reconnect to Redis. The connection breaks when the rush::Rush is saved to disk. Call this method to reconnect after loading the object.
OptimInstanceAsync$reconnect()
The OptimInstanceAsyncMultiCrit specifies an optimization problem for an OptimizerAsync.
The function oi_async()
creates an OptimInstanceAsyncMultiCrit.
bbotk::OptimInstance
-> bbotk::OptimInstanceAsync
-> OptimInstanceAsyncMultiCrit
result_x_domain
(list()
)
(transformed) x part of the result in the domain space of the objective.
result_y
(numeric(1)
)
Optimal outcome.
new()
Creates a new instance of this R6 class.
OptimInstanceAsyncMultiCrit$new( objective, search_space = NULL, terminator, check_values = FALSE, callbacks = NULL, archive = NULL, rush = NULL )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
rush
(Rush
)
If a rush instance is supplied, the optimization runs without batches.
assign_result()
The OptimizerAsync writes the best found points and estimated performance values here (probably the Pareto set / front). For internal use.
OptimInstanceAsyncMultiCrit$assign_result(xdt, ydt, extra = NULL, ...)
xdt
(data.table::data.table()
)
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4))
.
Column names have to match ids of the search_space
.
However, xdt
can contain additional columns.
ydt
(data.table::data.table()
)
Optimal outcomes, e.g. the Pareto front.
extra
(data.table::data.table()
)
Additional information.
...
(any
)
ignored.
clone()
The objects of this class are cloneable with this method.
OptimInstanceAsyncMultiCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
The OptimInstanceAsyncSingleCrit
specifies an optimization problem for an OptimizerAsync.
The function oi_async()
creates an OptimInstanceAsyncSingleCrit.
bbotk::OptimInstance
-> bbotk::OptimInstanceAsync
-> OptimInstanceAsyncSingleCrit
result_x_domain
(list()
)
(transformed) x part of the result in the domain space of the objective.
result_y
(numeric()
)
Optimal outcome.
new()
Creates a new instance of this R6 class.
OptimInstanceAsyncSingleCrit$new( objective, search_space = NULL, terminator, check_values = FALSE, callbacks = NULL, archive = NULL, rush = NULL )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
rush
(Rush
)
If a rush instance is supplied, the optimization runs without batches.
assign_result()
The OptimizerAsync object writes the best found point and estimated performance value here. For internal use.
OptimInstanceAsyncSingleCrit$assign_result(xdt, y, extra = NULL, ...)
xdt
(data.table::data.table()
)
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4))
.
Column names have to match ids of the search_space
.
However, xdt
can contain additional columns.
y
(numeric(1)
)
Optimal outcome.
extra
(data.table::data.table()
)
Additional information.
...
(any
)
ignored.
clone()
The objects of this class are cloneable with this method.
OptimInstanceAsyncSingleCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
The OptimInstanceBatch
specifies an optimization problem for an OptimizerBatch.
The function oi()
creates an OptimInstanceBatchSingleCrit or OptimInstanceBatchMultiCrit.
bbotk::OptimInstance
-> OptimInstanceBatch
objective_multiplicator
(integer()
).
result
(data.table::data.table)
Get result
result_x_search_space
(data.table::data.table)
x part of the result in the search space.
result_x_domain
(list()
)
(transformed) x part of the result in the domain space of the objective.
result_y
(numeric()
)
Optimal outcome.
is_terminated
(logical(1)
).
new()
Creates a new instance of this R6 class.
OptimInstanceBatch$new( objective, search_space = NULL, terminator, check_values = TRUE, callbacks = NULL, archive = NULL, label = NA_character_, man = NA_character_ )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
eval_batch()
Evaluates all input values in xdt
by calling
the Objective. Applies possible transformations to the input values
and writes the results to the Archive.
Before each batch-evaluation, the Terminator is checked, and if it
is positive, an exception of class terminated_error
is raised. This
function should be internally called by the Optimizer.
OptimInstanceBatch$eval_batch(xdt)
xdt
(data.table::data.table()
)
x values as data.table()
with one point per row. Contains the value in
the search space of the OptimInstance object. Can contain additional
columns for extra information.
objective_function()
Evaluates (untransformed) points of only numeric values. Returns a
numeric scalar for single-crit or a numeric vector for multi-crit. The
return value(s) are negated if the measure is maximized. Internally,
$eval_batch()
is called with a single row. This function serves as an
objective function for optimizers of numeric spaces, which always assume
minimization.
OptimInstanceBatch$objective_function(x)
x
(numeric()
)
Untransformed points.
Objective value as numeric(1)
, negated for maximization problems.
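An illustrative sketch (not taken from the package documentation) of calling $objective_function() directly on a purely numeric search space:

library(bbotk)

objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x^2),
  domain = ps(x = p_dbl(-1, 1)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 10)
)

# evaluates a single untransformed point via $eval_batch();
# the returned value would be negated for a maximized objective
instance$objective_function(0.5)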
clone()
The objects of this class are cloneable with this method.
OptimInstanceBatch$clone(deep = FALSE)
deep
Whether to make a deep clone.
The OptimInstanceBatchMultiCrit specifies an optimization problem for an OptimizerBatch.
The function oi()
creates an OptimInstanceBatchMultiCrit.
bbotk::OptimInstance
-> bbotk::OptimInstanceBatch
-> OptimInstanceBatchMultiCrit
result_x_domain
(list()
)
(transformed) x part of the result in the domain space of the objective.
result_y
(numeric(1)
)
Optimal outcome.
new()
Creates a new instance of this R6 class.
OptimInstanceBatchMultiCrit$new( objective, search_space = NULL, terminator, check_values = TRUE, callbacks = NULL, archive = NULL )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
assign_result()
The Optimizer object writes the best found points and estimated performance values here (probably the Pareto set / front). For internal use.
OptimInstanceBatchMultiCrit$assign_result(xdt, ydt, extra = NULL, ...)
xdt
(data.table::data.table()
)
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4))
.
Column names have to match ids of the search_space
.
However, xdt
can contain additional columns.
ydt
(data.table::data.table()
)
Optimal outcome.
extra
(data.table::data.table()
)
Additional information.
...
(any
)
ignored.
clone()
The objects of this class are cloneable with this method.
OptimInstanceBatchMultiCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
The OptimInstanceBatchSingleCrit specifies an optimization problem for an OptimizerBatch.
The function oi()
creates an OptimInstanceBatchSingleCrit.
bbotk::OptimInstance
-> bbotk::OptimInstanceBatch
-> OptimInstanceBatchSingleCrit
new()
Creates a new instance of this R6 class.
OptimInstanceBatchSingleCrit$new( objective, search_space = NULL, terminator, check_values = TRUE, callbacks = NULL, archive = NULL )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
archive
(Archive).
assign_result()
The Optimizer object writes the best found point and estimated performance value here. For internal use.
OptimInstanceBatchSingleCrit$assign_result(xdt, y, extra = NULL, ...)
xdt
(data.table::data.table()
)
Set of untransformed points / points from the search space.
One point per row, e.g. data.table(x1 = c(1, 3), x2 = c(2, 4))
.
Column names have to match ids of the search_space
.
However, xdt
can contain additional columns.
y
(numeric(1)
)
Optimal outcome.
extra
(data.table::data.table()
)
Additional information.
...
(any
)
ignored.
clone()
The objects of this class are cloneable with this method.
OptimInstanceBatchSingleCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
OptimInstanceMultiCrit
is a deprecated class that is now a wrapper around OptimInstanceBatchMultiCrit.
bbotk::OptimInstance
-> bbotk::OptimInstanceBatch
-> bbotk::OptimInstanceBatchMultiCrit
-> OptimInstanceMultiCrit
new()
Creates a new instance of this R6 class.
OptimInstanceMultiCrit$new( objective, search_space = NULL, terminator, keep_evals = "all", check_values = TRUE, callbacks = NULL )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
keep_evals
(character(1)
)
Keep all
or only best
evaluations in archive?
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
clone()
The objects of this class are cloneable with this method.
OptimInstanceMultiCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
OptimInstanceSingleCrit
is a deprecated class that is now a wrapper around OptimInstanceBatchSingleCrit
.
bbotk::OptimInstance
-> bbotk::OptimInstanceBatch
-> bbotk::OptimInstanceBatchSingleCrit
-> OptimInstanceSingleCrit
new()
Creates a new instance of this R6 class.
OptimInstanceSingleCrit$new( objective, search_space = NULL, terminator, keep_evals = "all", check_values = TRUE, callbacks = NULL )
objective
(Objective)
Objective function.
search_space
(paradox::ParamSet)
Specifies the search space for the Optimizer. The paradox::ParamSet
describes either a subset of the domain
of the Objective or it describes
a set of parameters together with a trafo
function that transforms values
from the search space to values of the domain. Depending on the context, this
value defaults to the domain of the objective.
terminator
Terminator
Termination criterion.
keep_evals
(character(1)
)
Keep all
or only best
evaluations in archive?
check_values
(logical(1)
)
Should points before the evaluation and the results be checked for validity?
callbacks
(list of mlr3misc::Callback)
List of callbacks.
clone()
The objects of this class are cloneable with this method.
OptimInstanceSingleCrit$clone(deep = FALSE)
deep
Whether to make a deep clone.
The Optimizer
implements the optimization algorithm.
Optimizer
is an abstract base class that implements the base functionality each optimizer must provide.
An Optimizer
object describes the optimization strategy.
An Optimizer
object must write its result to the $assign_result()
method of the OptimInstance at the end in order to store the best point and its estimated performance vector.
$optimize()
supports progress bars via the package progressr
combined with a Terminator. Simply wrap the function in
progressr::with_progress()
to enable them. We recommend using the progress package as the backend; enable it with progressr::handlers("progress").
id
(character(1)
)
Identifier of the object.
Used in tables, plot and text output.
param_set
paradox::ParamSet
Set of control parameters.
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
param_classes
(character()
)
Supported parameter classes that the optimizer can optimize, as given in the paradox::ParamSet
$class
field.
properties
(character()
)
Set of properties of the optimizer.
Must be a subset of bbotk_reflections$optimizer_properties
.
packages
(character()
)
Set of required packages.
A warning is signaled by the constructor if at least one of the packages is not installed. The packages are loaded (not attached) later on demand via requireNamespace()
.
new()
Creates a new instance of this R6 class.
Optimizer$new( id = "optimizer", param_set, param_classes, properties, packages = character(), label = NA_character_, man = NA_character_ )
id
(character(1)
)
Identifier for the new instance.
param_set
(paradox::ParamSet)
Set of control parameters.
param_classes
(character()
)
Supported parameter classes that the optimizer can optimize, as given in the paradox::ParamSet
$class
field.
properties
(character()
)
Set of properties of the optimizer.
Must be a subset of bbotk_reflections$optimizer_properties
.
packages
(character()
)
Set of required packages.
A warning is signaled by the constructor if at least one of the packages is not installed. The packages are loaded (not attached) later on demand via requireNamespace()
.
label
(character(1)
)
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1)
)
String in the format [pkg]::[topic]
pointing to a manual page for this object.
The referenced help package can be opened via method $help()
.
format()
Helper for print outputs.
Optimizer$format(...)
...
(ignored).
print()
Print method.
Optimizer$print()
(character()
).
help()
Opens the corresponding help page referenced by field $man
.
Optimizer$help()
clone()
The objects of this class are cloneable with this method.
Optimizer$clone(deep = FALSE)
deep
Whether to make a deep clone.
The OptimizerAsync implements the asynchronous optimization algorithm. The optimization is performed asynchronously on a set of workers.
OptimizerAsync is the abstract base class for all asynchronous optimizers.
It provides the basic structure for asynchronous optimization algorithms.
The public method $optimize()
is the main entry point for the optimization and runs in the main process.
The method starts the optimization process by starting the workers and pushing the necessary objects to the workers.
Optionally, a set of points can be created, e.g. an initial design, and pushed to the workers.
The private method $.optimize()
is the actual optimization algorithm that runs on the workers.
Usually, the method proposes new points, evaluates them, and updates the archive.
bbotk::Optimizer
-> OptimizerAsync
optimize()
Performs the optimization on a OptimInstanceAsyncSingleCrit or OptimInstanceAsyncMultiCrit until termination. The single evaluations will be written into the ArchiveAsync. The result will be written into the instance object.
OptimizerAsync$optimize(inst)
clone()
The objects of this class are cloneable with this method.
OptimizerAsync$clone(deep = FALSE)
deep
Whether to make a deep clone.
Abstract OptimizerBatch
class that implements the base functionality each OptimizerBatch
subclass must provide.
An OptimizerBatch
object describes the optimization strategy.
An OptimizerBatch
object must write its result to the $assign_result()
method of the OptimInstance at the end in order to store the best point and its estimated performance vector.
$optimize() supports progress bars via the package progressr combined with a Terminator. Simply wrap the call in progressr::with_progress() to enable them. We recommend using the progress package as backend; enable it with progressr::handlers("progress").
bbotk::Optimizer -> OptimizerBatch
optimize()
Performs the optimization and writes the optimization result into the OptimInstanceBatch. The optimization result is returned, but the complete optimization path is stored in the ArchiveBatch of the OptimInstanceBatch.
OptimizerBatch$optimize(inst)
inst
(OptimInstanceBatch).
clone()
The objects of this class are cloneable with this method.
OptimizerBatch$clone(deep = FALSE)
deep
Whether to make a deep clone.
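For illustration, a minimal sketch of a complete batch optimization run; the quadratic objective and all parameter names are invented for this example:
library(bbotk)
library(paradox)
# Toy objective: minimize a simple quadratic function
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x1^2 + xs$x2^2),
  domain = ps(x1 = p_dbl(-5, 5), x2 = p_dbl(-5, 5)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)
instance = OptimInstanceBatchSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 20)
)
optimizer = opt("random_search", batch_size = 5)
optimizer$optimize(instance)  # writes the result into the instance via $assign_result()
instance$result               # best point and its performance
instance$archive              # ArchiveBatch with the complete optimization path
# Optional progress bar, as described above:
# progressr::handlers("progress")
# progressr::with_progress(optimizer$optimize(instance))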
Wraps the progressr::progressor() function and stores the current progress.
progressor
(progressr::progressor()).
max_steps
(integer(1)).
current_steps
(integer(1)).
unit
(character(1)).
new()
Creates a new instance of this R6 class.
Progressor$new(progressor, unit)
progressor
(progressr::progressor())
Progressor function.
unit
(character(1))
Unit of progress.
update()
Updates progressr::progressor() with current steps.
Progressor$update(terminator, archive)
terminator
(Terminator).
archive
(Archive).
clone()
The objects of this class are cloneable with this method.
Progressor$clone(deep = FALSE)
deep
Whether to make a deep clone.
Shrinks a paradox::ParamSet towards a point. Boundaries of numeric values are shrunk to an interval around the point of half of the previous length, while for discrete variables, a random (currently not chosen) level is dropped.
Note that for paradox::p_lgl() parameters, the value to be shrunk around is set as the default value instead of dropping a level. Also, a tag "shrinked" is added.
Note that the returned paradox::ParamSet has lost all its original defaults, as they may have become infeasible.
If the paradox::ParamSet has a trafo, x is expected to contain the transformed values.
shrink_ps(param_set, x, check.feasible = FALSE)
param_set
(paradox::ParamSet)
x
(data.table::data.table)
check.feasible
(logical(1))
library(paradox)
library(data.table)
param_set = ps(
  x1 = p_dbl(lower = 0, upper = 10),
  x2 = p_int(lower = -10, upper = 10),
  x3 = p_fct(levels = c("a", "b", "c")),
  x4 = p_lgl()
)
x = data.table(x1 = 5, x2 = 0, x3 = "b", x4 = FALSE)
shrink_ps(param_set, x = x)
Error class for termination.
terminated_error(optim_instance)
optim_instance
(OptimInstance)
Abstract Terminator class that implements the base functionality each terminator must provide.
A terminator is an object that determines when to stop the optimization.
Termination of optimization works as follows:
Evaluations in an instance are performed in batches.
Before each batch evaluation, the Terminator is checked, and if it is positive, we stop.
The optimization algorithm itself might decide not to produce any more points, or even might decide to do a smaller batch in its last evaluation.
Therefore, note the following: while it is possible to exercise fine-grained control over termination, and for many optimization algorithms we can specify exactly when to stop, it may happen that too few or too many evaluations are performed, especially if multiple points are evaluated in a single batch (cf. the batch size parameter of many optimization algorithms). It is therefore advised to check the size of the returned archive, in particular when benchmarking multiple optimization algorithms.
Terminator subclasses can override .status() to support progress bars via the package progressr.
The method must return the maximum number of steps (max_steps) and the currently achieved number of steps (current_steps) as a named integer vector.
id
(character(1))
Identifier of the object.
Used in tables, plot and text output.
param_set
(paradox::ParamSet)
Set of control parameters.
label
(character(1))
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
The referenced help package can be opened via method $help().
properties
(character())
Set of properties of the terminator.
Must be a subset of bbotk_reflections$terminator_properties.
unit
(character())
Unit of steps.
new()
Creates a new instance of this R6 class.
Terminator$new( id, param_set = ps(), properties = character(), unit = "percent", label = NA_character_, man = NA_character_ )
id
(character(1))
Identifier for the new instance.
param_set
(paradox::ParamSet)
Set of control parameters.
properties
(character())
Set of properties of the terminator.
Must be a subset of bbotk_reflections$terminator_properties.
unit
(character())
Unit of steps.
label
(character(1))
Label for this object.
Can be used in tables, plot and text output instead of the ID.
man
(character(1))
String in the format [pkg]::[topic] pointing to a manual page for this object.
The referenced help package can be opened via method $help().
format()
Helper for print outputs.
Terminator$format(with_params = FALSE, ...)
with_params
(logical(1))
Add parameter values to format string.
...
(ignored).
print()
Printer.
Terminator$print(...)
...
(ignored).
status()
Returns how many progression steps have been made (current_steps) and the number of steps needed for termination (max_steps).
Terminator$status(archive)
archive
(Archive).
named integer(2).
remaining_time()
Returns the remaining runtime in seconds. If the terminator is not time-based, the remaining runtime is Inf.
Terminator$remaining_time(archive)
archive
(Archive).
integer(1).
clone()
The objects of this class are cloneable with this method.
Terminator$clone(deep = FALSE)
deep
Whether to make a deep clone.
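A small sketch of querying a terminator against an archive; the toy objective and all names are illustrative, and the "evals" terminator and "random_search" optimizer are assumed to be available from the respective dictionaries:
library(bbotk)
library(paradox)
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = xs$x1^2),
  domain = ps(x1 = p_dbl(-5, 5)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)
terminator = trm("evals", n_evals = 5)
instance = OptimInstanceBatchSingleCrit$new(objective, terminator = terminator)
opt("random_search", batch_size = 1)$optimize(instance)
terminator$status(instance$archive)          # named integer(2): max_steps and current_steps
terminator$remaining_time(instance$archive)  # Inf, since "evals" is not time-based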
Other Terminator:
mlr_terminators, mlr_terminators_clock_time, mlr_terminators_combo, mlr_terminators_evals, mlr_terminators_none, mlr_terminators_perf_reached, mlr_terminators_run_time, mlr_terminators_stagnation, mlr_terminators_stagnation_batch, mlr_terminators_stagnation_hypervolume
Transforms a given list() to a list with transformed x values.
trafo_xs(xs, search_space)
xs
(list())
search_space
(paradox::ParamSet)
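A brief sketch of trafo_xs() using a search space whose trafo exponentiates the parameter; the parameter name and values are chosen only for illustration:
library(bbotk)
library(paradox)
# Search space on a log10 scale; the trafo maps values back to the original scale
search_space = ps(x = p_dbl(lower = -3, upper = 3, trafo = function(x) 10^x))
trafo_xs(list(x = 2), search_space)  # list(x = 100)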
This function complements mlr_terminators with functions in the spirit of mlr_sugar from mlr3.
trm(.key, ...) trms(.keys, ...)
.key
(character(1))
Key of the Terminator in mlr_terminators.
...
(named list())
Named arguments passed to the constructor.
.keys
(character())
Keys of multiple Terminators in mlr_terminators.
Terminator for trm().
List of Terminators for trms().
trm("evals", n_evals = 10)
trm("evals", n_evals = 10)