Title: | Evolutionary Parameter Estimation for 'Repast Simphony' Models |
---|---|
Description: | EvoPER, Evolutionary Parameter Estimation for Individual-based Models, is an extensible package providing optimization-driven parameter estimation methods based on metaheuristics and evolutionary computation techniques (Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization for continuous domains, Tabu Search, Evolutionary Strategies, ...), which can be more efficient and, in some cases, require fewer model evaluations than alternatives relying on experimental design. Currently there is built-in support for models developed with the 'Repast Simphony' agent-based framework (<https://repast.github.io/>) and with NetLogo (<https://ccl.northwestern.edu/netlogo/>), two of the most widely used frameworks for agent-based modeling. |
Authors: | Antonio Prestes Garcia [aut, cre], Alfonso Rodriguez-Paton [aut, ths] |
Maintainer: | Antonio Prestes Garcia <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.5.0 |
Built: | 2024-10-30 06:53:35 UTC |
Source: | CRAN |
An implementation of the Ant Colony Optimization algorithm for continuous variables.
abm.acor(objective, options = NULL)
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
extremize("acor", f)
## End(Not run)
This function tries to provide a rough approximation to the best solution when no information is available about the correct range of the input parameters of the objective function. It can be useful for studying the behavior of individual-based models with high variability in the output variables showing nonlinear behaviors.
abm.ees1(objective, options = NULL)
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
extremize("ees1", f)
## End(Not run)
This function tries to provide a rough approximation to the best solution when no information is available about the correct range of the input parameters of the objective function. It can be useful for studying the behavior of individual-based models with high variability in the output variables showing nonlinear behaviors.
abm.ees2(objective, options = NULL)
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
extremize("ees2", f)
## End(Not run)
An implementation of the Particle Swarm Optimization method for parameter estimation of individual-based models.
abm.pso(objective, options = NULL)
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
[1] Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of ICNN 95 - International Conference on Neural Networks (Vol. 4, pp. 1942-1948). IEEE.
[2] Poli, R., Kennedy, J., & Blackwell, T. (2007). Particle swarm optimization. Swarm Intelligence, 1(1), 33-57.
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
extremize("pso", f)
## End(Not run)
An implementation of the Simulated Annealing algorithm for parameter estimation of individual-based models.
abm.saa(objective, options = NULL)
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
The best solution.
[1] Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by Simulated Annealing. Science, 220(4598).
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
extremize("saa", f)
## End(Not run)

## Not run:
## A Repast defined function
f <- RepastFunction$new("/usr/models/BactoSim(HaldaneEngine-1.0)", "ds::Output", 300)

## or a plain function
f1 <- function(x1, x2, x3, x4) {
  10 * (x1 - 1)^2 + 20 * (x2 - 2)^2 + 30 * (x3 - 3)^2 + 40 * (x4 - 4)^2
}
f <- PlainFunction$new(f1)

f$addFactor(name="cyclePoint", min=0, max=90)
f$addFactor(name="conjugationCost", min=0, max=100)
f$addFactor(name="pilusExpressionCost", min=0, max=100)
f$addFactor(name="gamma0", min=1, max=10)

abm.saa(f, 100, 1, 100, 0.75)
## End(Not run)
An implementation of the Tabu Search algorithm for parameter estimation.
abm.tabu(objective, options = NULL)
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
[1] Fred Glover (1989). "Tabu Search - Part 1". ORSA Journal on Computing, 190-206. doi:10.1287/ijoc.1.3.190.
[2] Fred Glover (1990). "Tabu Search - Part 2". ORSA Journal on Computing, 4-32. doi:10.1287/ijoc.2.1.4.
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
## or, using discrete levels
f$Parameter0(name="x1", levels=c(0:4))
f$Parameter0(name="x2", levels=c(-2,-1,0,1,2))
extremize("tabu", f)
## End(Not run)
This function is used for creating and maintaining the ACOr solution archive 'T'. The function keeps track of the 'k' solutions in the archive.
acor.archive(s, f, w, k, T = NULL)
s |
The solution 'ants' |
f |
The evaluation of solution |
w |
The weight vector |
k |
The archive size |
T |
The current archive |
The solution archive
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
Helper function for extracting the 'F' function evaluations from the ACOr archive 'T'
acor.F(T)
T |
The solution archive |
The F matrix
Given a weight vector, calculate the probabilities of selecting the lth Gaussian function and return the index of the Gaussian selected with probability p.
acor.lthgaussian(W)
W |
The vector of weights |
The index of the selected Gaussian function
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
Helper function for getting the size of a solution
acor.N(T)
T |
The solution archive |
The size 'n' of a solution 's'
Calculate the probability of choosing the lth Gaussian function
acor.probabilities(W, l = NULL)
W |
The vector of weights |
l |
The lth element of algorithm solution archive T |
The vector of probabilities 'p'
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
Helper function for extracting solution 'S' from archive 'T'
acor.S(T)
T |
The solution archive |
The solution matrix
Calculate the value of sigma
acor.sigma(Xi, k, T)
Xi |
The algorithm parameter |
k |
The solution archive size |
T |
The solution archive |
The sigma value
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
Update the solution using the Gaussian kernel
acor.updateants(S, N, W, t.mu, t.sigma)
S |
The current solution ants |
N |
The number of required ants in the solution |
W |
The weight vector |
t.mu |
The 'mean' from solution archive |
t.sigma |
The value of sigma from solution archive |
The new solution ants
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
Helper function for extracting the weight vector 'W' from the ACOr archive 'T'
acor.W(T)
T |
The solution archive |
The weight vector
Calculates the weight element of ACOr algorithm for the solution archive.
acor.weigth(q, k, l)
q |
The algorithm parameter; when small, best-ranked solutions are preferred |
k |
The archive size |
l |
The lth element of algorithm solution archive T |
A scalar or a vector with the calculated weight.
[1] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155-1173. http://doi.org/10.1016/j.ejor.2006.06.046
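In the reference above, the weight of the lth best-ranked solution is a Gaussian function of its rank. A minimal sketch of that formula (an illustration of the technique, not the package's acor.weigth source):

```r
# Weight of the lth-ranked solution in the ACOr archive, following
# Socha & Dorigo (2008): w_l = 1/(q*k*sqrt(2*pi)) *
#   exp(-(l - 1)^2 / (2 * q^2 * k^2))
acor.weight.sketch <- function(q, k, l) {
  (1 / (q * k * sqrt(2 * pi))) * exp(-(l - 1)^2 / (2 * q^2 * k^2))
}

w <- acor.weight.sketch(q = 0.2, k = 10, l = 1:10)
# Weights decrease with rank: a small q concentrates the probability
# mass on the best-ranked solutions in the archive.
```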
The assert function stops execution if the logical expression given by the parameter 'expresion' is false.
assert(expresion, string)
expresion |
Some logical expression |
string |
The text message to show if expression does not hold |
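The semantics can be illustrated with a small sketch (my.assert is a hypothetical stand-in, not the package implementation):

```r
# Illustrative stand-in for assert(): stop with 'string' when the
# logical expression is FALSE, otherwise continue silently.
my.assert <- function(expresion, string) {
  if (!isTRUE(expresion)) stop(string)
  invisible(TRUE)
}

my.assert(1 < 2, "1 must be less than 2")  # passes silently
# my.assert(2 < 1, "unreachable")          # would stop execution
```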
Given a set S of N solutions created with sortSolution, this function returns the fitness component for the best solution.
bestFitness(S)
S |
The solution set |
The best fitness value
Given a set S of N solutions created with sortSolution, this function returns the best solution found.
bestSolution(S)
S |
The solution set |
The best solution
Simple implementation of a circular buffer.
cbuf(b, v, e)
b |
The variable holding the current buffer content |
v |
The new value to be added to b |
e |
The length of circular buffer |
The buffer b plus the element v minus the least recently added element
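The behavior can be sketched as follows (cbuf.sketch is illustrative; the newest-first ordering of elements is an assumption):

```r
# A fixed-length circular buffer: prepend the new value 'v' to the
# buffer 'b' and drop the oldest element once the length exceeds 'e'.
cbuf.sketch <- function(b, v, e) {
  b <- c(v, b)
  if (length(b) > e) b <- b[seq_len(e)]
  b
}

b <- c()
for (v in 1:5) b <- cbuf.sketch(b, v, 3)
# b now holds the three most recently added values: 5, 4, 3
```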
Compare the number of function evaluations and convergence for the following optimization algorithms: "saa", "pso", "acor", "ees1".
compare.algorithms1(F, seeds = c(27, 2718282, 36190727, 3141593, -91190721, -140743, 1321))
F |
The function to be tested |
seeds |
The random seeds which will be used for testing algorithms |
## Not run:
rm(list=ls())
d.cigar4 <- compare.algorithms1(f0.cigar4)
d.schaffer4 <- compare.algorithms1(f0.schaffer4)
d.griewank4 <- compare.algorithms1(f0.griewank4)
d.bohachevsky4 <- compare.algorithms1(f0.bohachevsky4)
d.rosenbrock4 <- compare.algorithms1(f0.rosenbrock4)
## End(Not run)
Simple helper function for contour plots
contourplothelper(d, x, y, z, nbins = 32, binwidth = c(10, 10), points = c(300, 300), title = NULL)
d |
A data frame. |
x |
A string with the dataframe column name for x axis. |
y |
A string with the dataframe column name for y axis. |
z |
A string with the dataframe column name for z axis. |
nbins |
The number of bins. The default is 32. |
binwidth |
The binwidths for 'kde2d'. Can be a scalar or a vector. |
points |
The number of grid points. Can be a scalar or a vector. |
title |
The optional plot title. May be omitted. |
Repeat the evaluation of the best solution to cope with output variability.
ees1.challenge(solution, objective)
solution |
The Problem solution |
objective |
The objective function |
Explore the solution space on the neighborhood of solution 's' in order to find a new best.
ees1.explore(s, weight, p = 0.01)
s |
The Problem solution |
weight |
The exploration intensity |
p |
The mutation probability |
This function 'mixes' the elements present in the solution. The parameter 'mu' controls the intensity of mixing: low values give preference to the best solution components, and high values make the components be selected randomly.
ees1.mating(solution, mu)
solution |
The Problem solution |
mu |
The mixing intensity ratio, from 0 to 1. The mixing intensity controls the probability of choosing worse solutions |
This function 'mixes' the elements present in the solution. The parameter 'mu' controls the intensity of mixing: low values give preference to the best solution components, and high values make the components be selected randomly.
ees1.mating1(solution, mu)
solution |
The Problem solution |
mu |
The mixing intensity ratio, from 0 to 1. The mixing intensity controls the probability of choosing worse solutions |
Performs the mutation on generated solution
ees1.mutation(solution, mates, p = 0.01)
solution |
The Problem solution |
mates |
The mixed parents |
p |
The mutation probability |
Performs the recombination on solution
ees1.recombination(solution, mates)
solution |
The Problem solution |
mates |
The mixed parents |
Select the elements with best fitness but accept uphill moves with probability 'kkappa'.
ees1.selection(s0, s1, kkappa)
s0 |
The current best solution set |
s1 |
The new solution |
kkappa |
The selection pressure |
Wrapper for logging debug messages.
elog.debug(...)
... |
Variable number of arguments including a format string. |
Wrapper for logging error messages.
elog.error(...)
... |
Variable number of arguments including a format string. |
Wrapper for logging info messages.
elog.info(...)
... |
Variable number of arguments including a format string. |
Configure the current log level
elog.level(level = NULL)
level |
The log level (ERROR|WARN|INFO|DEBUG) |
The log level
Checks if parameters fall within the upper and lower bounds
enforceBounds(particles, factors)
particles |
The particle set |
factors |
The defined range for the objective function parameters |
The particles inside the valid limits
For each element in solution 's' evaluate the respective fitness.
es.evaluate(f, s, enforce = TRUE)
f |
A reference to an instance of objective function |
s |
The set of solutions |
enforce |
If true the values are enforced to fall within provided range |
The solution ordered by its fitness.
Entry point for optimization functions
extremize(type, objective, options = NULL)
type |
The optimization method (aco,pso,saa,sda) |
objective |
An instance of the ObjectiveFunction class (or a subclass) |
options |
An appropriate instance of a subclass of the Options class |
## Not run:
f <- PlainFunction$new(f0.rosenbrock2)
f$Parameter(name="x1", min=-100, max=100)
f$Parameter(name="x2", min=-100, max=100)
extremize("pso", f)
## End(Not run)
The Ackley function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0. Domain: xi in [-32.768, 32.768] for all i = 1, ..., N.
f0.ackley(...)
... |
The variadic list of function variables. |
The function value
https://www.sfu.ca/~ssurjano/ackley.html
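The standard form of the function, with the recommended constants a = 20, b = 0.2, c = 2*pi, can be written as a short sketch; whether f0.ackley uses exactly these constants is an assumption:

```r
# Ackley function of N variables:
# f(x) = -a*exp(-b*sqrt(mean(x^2))) - exp(mean(cos(c*x))) + a + exp(1)
ackley.sketch <- function(...) {
  x <- c(...)
  a <- 20; b <- 0.2; cc <- 2 * pi
  -a * exp(-b * sqrt(mean(x^2))) - exp(mean(cos(cc * x))) + a + exp(1)
}

ackley.sketch(0, 0, 0, 0)  # global optimum: evaluates to 0
```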
The Ackley function of four variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..4, where f(x) = 0.
f0.ackley4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
Two variable Rosenbrock function with random additive noise.
f0.adtn.rosenbrock2(x1, x2)
x1 |
Parameter 1 |
x2 |
Parameter 2 |
The Bohachevsky function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f0.bohachevsky(...)
... |
The variadic list of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Bohachevsky function of four variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..4, where f(x) = 0.
f0.bohachevsky4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
The Cigar function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f0.cigar(...)
... |
The variadic list of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Cigar function of four variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..4, where f(x) = 0.
f0.cigar4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
The Griewank function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f0.griewank(...)
... |
The variadic list of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Griewank function of four variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..4, where f(x) = 0.
f0.griewank4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
Two variable Rosenbrock function with random additive noise.
f0.nlnn.rosenbrock2(x1, x2)
x1 |
Parameter 1 |
x2 |
Parameter 2 |
This function is an example of how EvoPER can be used for estimating parameter values in order to produce oscillations with the desired period. It is not intended to be used directly; the provided wrappers should be used instead.
f0.periodtuningpp(x1, x2, x3, x4, period)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predation effect on the prey |
x4 |
The predation effect on the predator |
period |
The desired oscillation period |
The solution fitness cost
This function is an example of how EvoPER can be used for estimating parameter values in order to produce oscillations with the desired period.
f0.periodtuningpp12(x1, x2, x3, x4)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predation effect on the prey |
x4 |
The predation effect on the predator |
The solution fitness cost
## Not run:
rm(list=ls())
set.seed(-27262565)
f <- PlainFunction$new(f0.periodtuningpp12)
f$Parameter(name="x1", min=0.5, max=2)
f$Parameter(name="x2", min=0.5, max=2)
f$Parameter(name="x3", min=0.5, max=2)
f$Parameter(name="x4", min=0.5, max=2)
extremize("pso", f)
## End(Not run)
This function is an example of how EvoPER can be used for estimating parameter values in order to produce oscillations with the desired period.
f0.periodtuningpp24(x1, x2, x3, x4)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predation effect on the prey |
x4 |
The predation effect on the predator |
The solution fitness cost
## Not run:
rm(list=ls())
set.seed(-27262565)
f <- PlainFunction$new(f0.periodtuningpp24)
f$Parameter(name="x1", min=0.5, max=2)
f$Parameter(name="x2", min=0.5, max=2)
f$Parameter(name="x3", min=0.5, max=2)
f$Parameter(name="x4", min=0.5, max=2)
extremize("pso", f)
## End(Not run)
This function is an example of how EvoPER can be used for estimating parameter values in order to produce oscillations with the desired period.
f0.periodtuningpp48(x1, x2, x3, x4)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predation effect on the prey |
x4 |
The predation effect on the predator |
The solution fitness cost
## Not run:
rm(list=ls())
set.seed(-27262565)
f <- PlainFunction$new(f0.periodtuningpp48)
f$Parameter(name="x1", min=0.5, max=2)
f$Parameter(name="x2", min=0.5, max=2)
f$Parameter(name="x3", min=0.5, max=2)
f$Parameter(name="x4", min=0.5, max=2)
extremize("pso", f)
## End(Not run)
This function is an example of how EvoPER can be used for estimating parameter values in order to produce oscillations with the desired period.
f0.periodtuningpp72(x1, x2, x3, x4)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predation effect on the prey |
x4 |
The predation effect on the predator |
The solution fitness cost
## Not run:
rm(list=ls())
set.seed(-27262565)
f <- PlainFunction$new(f0.periodtuningpp72)
f$Parameter(name="x1", min=0.5, max=2)
f$Parameter(name="x2", min=0.5, max=2)
f$Parameter(name="x3", min=0.5, max=2)
f$Parameter(name="x4", min=0.5, max=2)
extremize("pso", f)
## End(Not run)
Two variable Rosenbrock function, where f(1,1) = 0
f0.rosenbrock2(x1, x2)
x1 |
Parameter 1 |
x2 |
Parameter 2 |
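The classic two-variable Rosenbrock function has the closed form below; that f0.rosenbrock2 follows it exactly is an assumption, not verified against the package source:

```r
# Rosenbrock function in two variables: 100*(x2 - x1^2)^2 + (1 - x1)^2
rosenbrock2.sketch <- function(x1, x2) {
  100 * (x2 - x1^2)^2 + (1 - x1)^2
}

rosenbrock2.sketch(1, 1)  # returns 0, the global optimum
```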
The Rosenbrock function of four variables for testing optimization methods. The global optimum is at xi = 1 for all i in 1..4, where f(x) = 0.
f0.rosenbrock4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
The Rosenbrock function of N variables for testing optimization methods. The global optimum is at xi = 1 for all i in 1..N, where f(x) = 0.
f0.rosenbrockn(...)
... |
The variadic list of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Schaffer function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f0.schaffer(...)
... |
The variadic list of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Schaffer function of four variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..4, where f(x) = 0.
f0.schaffer4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
The Schwefel function of N variables for testing optimization methods. The global optimum is at xi = 420.96874636 for all i in 1..N, where f(x) = 0. The range of xi is [-500, 500].
f0.schwefel(...)
... |
The variadic list of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Schwefel function of four variables for testing optimization methods. The global optimum is at xi = 420.96874636 for all i in 1..4, where f(x) = 0. The range of xi is [-500, 500].
f0.schwefel4(x1, x2, x3, x4)
x1 |
The first function variable |
x2 |
The second function variable |
x3 |
The third function variable |
x4 |
The fourth function variable |
The function value
Simple test function f(1,2,3,4) = 0
f0.test(x1, x2, x3, x4)
x1 |
Parameter 1 |
x2 |
Parameter 2 |
x3 |
Parameter 3 |
x4 |
Parameter 4 |
The Ackley function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0. Domain: xi in [-32.768, 32.768] for all i = 1, ..., N.
f1.ackley(x)
x |
The vector of function parameters |
The function value
https://www.sfu.ca/~ssurjano/ackley.html
Two variable Rosenbrock function with random additive noise.
f1.adtn.rosenbrock2(x)
x |
Parameter vector |
The Bohachevsky function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f1.bohachevsky(x)
x |
The vector of function parameters |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Cigar function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f1.cigar(x)
x |
The vector of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Griewank function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f1.griewank(x)
x |
The vector of function parameters |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
Two variable Rosenbrock function with random additive noise.
f1.nlnn.rosenbrock2(x)
x |
Parameter vector |
Two variable Rosenbrock function, where f(c(1,1)) = 0
f1.rosenbrock2(x)
x |
Parameter vector |
The Rosenbrock function of N variables for testing optimization methods. The global optimum is at xi = 1 for all i in 1..N, where f(x) = 0.
f1.rosenbrockn(x)
x |
The vector of function parameters |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Schaffer function of N variables for testing optimization methods. The global optimum is at xi = 0 for all i in 1..N, where f(x) = 0.
f1.schaffer(x)
x |
The vector of function parameters |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
The Schwefel function of N variables for testing optimization methods. The global optimum is at xi = 420.96874636 for all i in 1..N, where f(x) = 0. The range of xi is [-500, 500].
f1.schwefel(x)
x |
The vector of function variables. |
The function value
http://deap.gel.ulaval.ca/doc/dev/api/benchmarks.html
Simple test function f(c(1,2,3,4)) = 0
f1.test(x)
x |
Parameter vector |
Coerce dataframe columns to a specific type.
fixdfcolumns(df, cols = c(), skip = TRUE, type = as.numeric)
df |
The data frame. |
cols |
The dataframe columns to be skipped or included. |
skip |
If TRUE, the column names in 'cols' are skipped; when FALSE, the logic is inverted. |
type |
The type for which data frame columns must be converted. |
The data frame with converted column types.
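A sketch of the described coercion logic (fixdf.sketch is a hypothetical illustration; the package implementation may differ):

```r
# Coerce columns of 'df' with 'type'; when skip = TRUE the columns named
# in 'cols' are left untouched, when skip = FALSE only they are coerced.
fixdf.sketch <- function(df, cols = c(), skip = TRUE, type = as.numeric) {
  for (n in names(df)) {
    if (skip == (n %in% cols)) next  # skip or include according to 'skip'
    df[[n]] <- type(df[[n]])
  }
  df
}

d <- data.frame(id = c("a", "b"), x = c("1.5", "2.5"),
                stringsAsFactors = FALSE)
d <- fixdf.sketch(d, cols = "id")  # 'x' becomes numeric, 'id' is kept
```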
Generates a problem solution using discrete levels
generateSolution(parameters, size)
parameters |
The Objective Function parameter list |
size |
The solution size |
The solution set
Given a set S of N solutions created with sortSolution, this function returns the fitness component.
getFitness(S, i = NULL)
S |
The solution set |
i |
The fitness index, if null return the whole column. |
The selected fitness entry
Given a set S of N solutions created with sortSolution, this function returns the solution component. A solution set is a set of solutions and their associated fitness values.
getSolution(S)
S |
The solution set |
The solution set
Simple implementation for geometric mean
gm.mean(x)
x |
data |
geometric mean for data
Simple implementation for geometric standard deviation
gm.sd(x, mu = NULL)
x |
data |
mu |
The geometric mean. If not provided it is calculated. |
geometric standard deviation for data
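Both quantities are conventionally computed on the log scale; a sketch under that assumption (the package's exact convention for the geometric standard deviation may differ):

```r
# Geometric mean: exp of the arithmetic mean of the logs.
gm.mean.sketch <- function(x) exp(mean(log(x)))

# Geometric standard deviation relative to the geometric mean 'mu'.
gm.sd.sketch <- function(x, mu = NULL) {
  if (is.null(mu)) mu <- gm.mean.sketch(x)
  exp(sqrt(mean((log(x) - log(mu))^2)))
}

gm.mean.sketch(c(1, 10, 100))  # returns 10
```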
Simple helper for ploting histograms
histplothelper(d, x, title = NULL)
d |
A data frame. |
x |
A string with the dataframe column name for histogram |
title |
The plot title |
A ggplot2 plot object
Creates the initial solution population taking into account the lower and upper bounds of the provided experiment factors.
initSolution(parameters, N = 20, sampling = "mcs")
parameters |
The Objective Function parameter list |
N |
The size of Solution population |
sampling |
The population sampling scheme, namely <mcs|lhs|ffs>, standing respectively for Monte Carlo sampling, Latin hypercube sampling, and full factorial sampling |
A random set of solutions
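For the 'mcs' scheme the idea reduces to drawing each factor uniformly within its bounds; the parameter-list layout below (min/max fields) is an assumption for illustration:

```r
# Monte Carlo sampling of an initial population: one uniform draw per
# parameter and candidate, within the declared lower and upper bounds.
init.mcs.sketch <- function(parameters, N = 20) {
  as.data.frame(lapply(parameters, function(p) runif(N, p$min, p$max)))
}

params <- list(x1 = list(min = -100, max = 100),
               x2 = list(min = -100, max = 100))
s <- init.mcs.sketch(params, N = 20)  # 20 candidates, columns x1 and x2
```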
Checks that parameters are greater than the lower bounds
lowerBound(particles, factors)
particles |
The particle set |
factors |
the defined range for objective function parameters |
The particles greater than or equal to the lower limit
Calculates the magnitude order for a given value
Magnitude(v)
v |
The numerical value |
The magnitude order
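One common way to compute an order of magnitude is the floor of the base-10 logarithm; the sketch below (a hypothetical implementation, not the package source) illustrates the idea.

```r
# Order of magnitude of v: the base-10 exponent of its absolute value.
magnitude0 <- function(v) {
  floor(log10(abs(v)))
}

magnitude0(1234)   # 3
magnitude0(0.02)   # -2
```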
A naive approach for finding the period in a series of data points
naiveperiod(d)
d |
The data in which to search for a period |
A list with the average period and amplitude
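A naive period finder can be sketched as follows: locate local maxima and average the distance and height between consecutive peaks. This is a hypothetical illustration of the approach, not the package source.

```r
# Detect interior points strictly greater than both neighbors (peaks),
# then report the mean peak-to-peak distance and the mean peak height.
naiveperiod0 <- function(d) {
  n <- length(d)
  peaks <- which(d[2:(n - 1)] > d[1:(n - 2)] & d[2:(n - 1)] > d[3:n]) + 1
  list(period = mean(diff(peaks)), amplitude = mean(d[peaks]))
}

y <- sin(2 * pi * (1:1000) / 100)   # signal with period 100
naiveperiod0(y)                     # period 100, amplitude 1
```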
Searches for the NetLogo jar file on the provided path.
NLWrapper.FindJar(path)
path |
The base path for searching |
The path for NetLogo jar file
Gets the value of a model parameter
NLWrapper.GetParameter(obj, name)
obj |
The object returned by NLWrapper.Model |
name |
The parameter name string or the collection of parameter names |
The parameter values
## Not run: 
rm(list=ls())
p<- "C:/Program Files/NetLogo 6.0.4/app"
m<- file.path(p, "models", "Sample Models", "Biology", "Wolf Sheep Predation.nlogo")
o<- NLWrapper.Model(p, m)
v<- NLWrapper.GetParameter(o, c("initial-number-sheep"))
# or
v<- NLWrapper.GetParameter(o, c("initial-number-sheep","initial-number-wolves"))
## End(Not run)
This wrapper prepares the environment and instantiates the model
NLWrapper.Model(netlogodir, modelfile, dataset, maxtime)
netlogodir |
The base path of NetLogo installation |
modelfile |
The absolute path for NetLogo model file |
dataset |
The names of model variables |
maxtime |
The total number of iterations |
## Not run: 
rm(list=ls())
p<- "C:/Program Files/NetLogo 6.0.4/app"
output<- c("count sheep", "count wolves")
m<- file.path(p, "models", "Sample Models", "Biology", "Wolf Sheep Predation.nlogo")
o<- NLWrapper.Model(p, m, output, 150)
## End(Not run)
Executes a NetLogo model using RNetLogo.
NLWrapper.Run(obj, r = 1, seed = c())
obj |
The object returned by NLWrapper.Model |
r |
The number of replications |
seed |
The collection of random seeds |
Executes a NetLogo model experiment using RNetLogo.
NLWrapper.RunExperiment(obj, r = 1, design, FUN)
obj |
The object returned by NLWrapper.Model |
r |
The number of replications |
design |
The design matrix holding the parameter sampling |
FUN |
The calibration function |
A list containing the parameters, the calibration function output and the whole result set
## Not run: 
rm(list=ls())
objectivefn<- function(params, results) { 0 }
f<- AddFactor(name="initial-number-sheep",min=100,max=250)
f<- AddFactor(factors=f, name="initial-number-wolves",min=50,max=150)
f<- AddFactor(factors=f, name="grass-regrowth-time",min=30,max=100)
f<- AddFactor(factors=f, name="sheep-gain-from-food",min=1,max=50)
f<- AddFactor(factors=f, name="wolf-gain-from-food",min=1,max=100)
f<- AddFactor(factors=f, name="sheep-reproduce",min=1,max=20)
f<- AddFactor(factors=f, name="wolf-reproduce",min=1,max=20)
design<- AoE.LatinHypercube(factors=f)
p<- "C:/Program Files/NetLogo 6.0.4/app"
m<- file.path(p, "models", "Sample Models", "Biology", "Wolf Sheep Predation.nlogo")
output<- c("count sheep", "count wolves")
o<- NLWrapper.Model(p, m, output, 150)
v<- NLWrapper.RunExperiment(o, r=1, design, objectivefn)
NLWrapper.Shutdown(o)
## End(Not run)
Sets parameter values.
NLWrapper.SetParameter(obj, parameters)
obj |
The object returned by NLWrapper.Model |
parameters |
The data frame containing the parameters |
## Not run: 
rm(list=ls())
p<- "C:/Program Files/NetLogo 6.0.4/app"
m<- file.path(p, "models", "Sample Models", "Biology", "Wolf Sheep Predation.nlogo")
o<- NLWrapper.Model(p, m)
## End(Not run)
Configures the random seed
NLWrapper.SetRandomSeed(obj, seed)
obj |
The object returned by NLWrapper.Model |
seed |
The new random seed |
This wrapper terminates the RNetLogo execution environment.
NLWrapper.Shutdown(obj)
obj |
The object returned by NLWrapper.Model |
The base class for optimization functions.
object
The raw output of objective function
objective
The objective function
parameters
The parameter list for objective function
value
The results from objective function
The base class for the options for the optimization metaheuristics
type
The configuration type
neighborhood
The neighborhood function for population methods
discrete
Flag indicating whether a specific algorithm is discrete or continuous
nlevelz
The number of parameter levels generated when a range is provided; the default value is 5
container
The object holding the configuration options
Instantiates the Options class required for the specific metaheuristic method.
OptionsFactory(type, v = NULL)
type |
The metaheuristic method |
v |
The options object |
Options object
Converts parameters from continuous to discrete and vice versa as needed.
paramconverter(parameters, discrete, levelz = 5)
parameters |
The current parameter set |
discrete |
The desired parameter type |
levelz |
When discrete is true the number of levels to be generated |
The parameter collection cast to the desired mode
Creates the initial Solution population taking into account the lower and upper bounds of the provided experiment factors. This method works by dividing the solution space into partitions of size 'd' and then creating a full factorial combination of the partitions.
partSolutionSpace(parameters, d = 4)
parameters |
The Objective Function parameter list |
d |
The partition size. Default value 4. |
A set of solutions
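The partitioning idea can be sketched as follows: split each parameter range into 'd' partitions and take the full factorial combination of the partition midpoints, giving d^k candidates for k parameters. This is a hypothetical illustration assuming a simple list-of-ranges input, not the package source.

```r
# ranges: named list of c(min, max) pairs; d: partitions per parameter.
partSolutionSpace0 <- function(ranges, d = 4) {
  midpoints <- lapply(ranges, function(r) {
    width <- (r[2] - r[1]) / d
    r[1] + width * (seq_len(d) - 0.5)   # center of each partition
  })
  expand.grid(midpoints)                # full factorial combination
}

s <- partSolutionSpace0(list(x1 = c(-100, 100), x2 = c(-100, 100)), d = 4)
nrow(s)   # 16 candidate solutions (4^2)
```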
Pops the first element from a collection.
pop.first(x)
x |
The element collection |
The first element added to the list (FIFO)
Pops the last element from a collection.
pop.last(x)
x |
The element collection |
The last element added to the list (LIFO)
The solver for Lotka-Volterra differential equation.
predatorprey(x1, x2, x3, x4)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predating effect on prey |
x4 |
The predating effect on predator |
The ODE solution
Generates a plot for the predator-prey ODE output.
predatorprey.plot0(x1, x2, x3, x4, title = NULL)
x1 |
The growth rate of prey |
x2 |
The decay rate of predator |
x3 |
The predating effect on prey |
x4 |
The predating effect on predator |
title |
The optional plot title. May be omitted. |
A ggplot2 object
## Not run: predatorprey.plot0(1.351888, 1.439185, 1.337083, 0.9079049) ## End(Not run)
Simple wrapper for 'predatorprey.plot0' accepting the parameters as a list.
predatorprey.plot1(x, title = NULL)
x |
A list containing the values of predator/prey parameters c1, c2, c3 and c4, denoting respectively the growth rate of prey, the decay rate of predator, the predating effect on prey and the predating effect on predator |
title |
The optional plot title. May be omitted. |
A ggplot2 object
## Not run: rm(list=ls()) predatorprey.plot1(v$getBest()[1:4]) ## End(Not run)
Searches for the best particle solution, that which minimizes the objective function.
pso.best(objective, particles)
objective |
The results of evaluating the objective function |
particles |
The particles tested |
The best particle
Implementation of the constriction coefficient.
pso.chi(phi1, phi2)
phi1 |
Acceleration coefficient toward the previous best |
phi2 |
Acceleration coefficient toward the global best |
The calculated constriction coefficient
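The standard Clerc-Kennedy constriction coefficient is one form a function with this signature may implement; the sketch below (an assumption, not the package source) uses chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = phi1 + phi2 > 4.

```r
# Clerc-Kennedy constriction coefficient for PSO (sketch).
pso.chi0 <- function(phi1, phi2) {
  phi <- phi1 + phi2
  stopifnot(phi > 4)   # the formula requires phi > 4
  2 / abs(2 - phi - sqrt(phi^2 - 4 * phi))
}

pso.chi0(2.05, 2.05)   # ~0.7298, the commonly cited constriction value
```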
Finds the lbest for the particle 'i' using the topology function given by the topology parameter.
pso.lbest(i, pbest, topology)
i |
The particle position |
pbest |
The pbest particle collection |
topology |
The desired topology function |
The lbest for the i-th particle
The neighborhood function for a simple linear topology where every particle has k = 2 neighbors
pso.neighborhood.K2(i, n)
i |
The particle position |
n |
The size of the particle population |
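A k = 2 ring topology can be sketched as follows in 1-based indexing: each particle's neighborhood is itself plus its left and right neighbors on a ring of n particles. The function below is a hypothetical illustration, not the package source.

```r
# Ring (linear, wrap-around) topology with k = 2 neighbors.
ring.k2 <- function(i, n) {
  c((i - 2) %% n + 1, i, i %% n + 1)   # left neighbor, self, right neighbor
}

ring.k2(1, 10)    # 10 1 2 -- wraps around the ring
ring.k2(10, 10)   # 9 10 1
```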
The von Neumann neighborhood function for a lattice-based topology where every particle has k = 4 neighbors
pso.neighborhood.K4(i, n)
i |
The particle position |
n |
The size of the particle population |
Simple helper method for 'gbest' neighborhood
pso.neighborhood.KN(i, n)
i |
The particle position |
n |
The size of the particle population |
Shows the best particle of each simulated generation.
pso.printbest(objective, particles, generation, title)
objective |
An instance of ObjectiveFunction (or subclass) class ObjectiveFunction |
particles |
The current particle population |
generation |
The current generation |
title |
Some informational text to be shown |
Calculates the PSO Velocity
pso.Velocity(W = 1, Vi, phi1, phi2, Pi, Pg, Xi)
W |
Weight (Inertia weight or constriction coefficient) |
Vi |
Current Velocity vector |
phi1 |
Acceleration coefficient toward the previous best |
phi2 |
Acceleration coefficient toward the global best |
Pi |
Personal best |
Pg |
Neighborhood best |
Xi |
Particle vector |
Updated velocity
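The parameter names suggest the canonical PSO velocity update, V = W*Vi + phi1*r1*(Pi - Xi) + phi2*r2*(Pg - Xi) with r1, r2 uniform random draws. The sketch below illustrates that update; it is an assumption about the implementation, not the package source.

```r
# Canonical PSO velocity update (sketch): inertia/constriction term plus
# stochastic attraction toward the personal and neighborhood bests.
velocity0 <- function(W = 1, Vi, phi1, phi2, Pi, Pg, Xi) {
  r1 <- runif(length(Xi))   # random weight toward the personal best
  r2 <- runif(length(Xi))   # random weight toward the neighborhood best
  W * Vi + phi1 * r1 * (Pi - Xi) + phi2 * r2 * (Pg - Xi)
}

set.seed(42)
velocity0(W = 0.73, Vi = c(0, 0), phi1 = 2.05, phi2 = 2.05,
          Pi = c(1, 1), Pg = c(2, 2), Xi = c(0, 0))
```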
Pushes an element onto a collection.
push(x, v)
x |
The collection of elements |
v |
The value to be pushed |
The collection of elements
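Together with pop.first and pop.last above, these helpers amount to queue (FIFO) and stack (LIFO) access over one collection. A minimal sketch using plain R vectors (hypothetical implementations, not the package source):

```r
push0      <- function(x, v) c(x, v)            # append to the collection
pop.first0 <- function(x) x[1]                  # FIFO: oldest element
pop.last0  <- function(x) x[length(x)]          # LIFO: newest element

s <- c()
s <- push0(s, 1)
s <- push0(s, 2)
s <- push0(s, 3)
pop.first0(s)   # 1
pop.last0(s)    # 3
```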
A simple random seed generator.
random.wheel()
A random number for seeding
Boltzmann temperature function.
saa.bolt(t0, k)
t0 |
The current temperature |
k |
The annealing value |
The new temperature
Generates neighbor solutions for simulated annealing
saa.neighborhood(f, S, d, n)
f |
An instance of ObjectiveFunction (or subclass) class ObjectiveFunction |
S |
The current solution to find a neighbor |
d |
The distance from current solution S |
n |
The number of parameters to be perturbed |
The neighbor of solution S
Generates neighbor solutions by perturbing one randomly picked parameter of the current solution S.
saa.neighborhood1(f, S, d)
f |
An instance of ObjectiveFunction (or subclass) class ObjectiveFunction |
S |
The current solution to find a neighbor |
d |
The distance from current solution S |
The neighbor of solution S
Generates neighbor solutions by perturbing half of the parameters of the current solution S.
saa.neighborhoodH(f, S, d)
f |
An instance of ObjectiveFunction (or subclass) class ObjectiveFunction |
S |
The current solution to find a neighbor |
d |
The distance from current solution S |
The neighbor of solution S
Generates neighbor solutions by perturbing all parameters of the current solution S.
saa.neighborhoodN(f, S, d)
f |
An instance of ObjectiveFunction (or subclass) class ObjectiveFunction |
S |
The current solution to find a neighbor |
d |
The distance from current solution S |
The neighbor of solution S
Temperature function t/k
saa.tbyk(t0, k)
t0 |
The current temperature |
k |
The annealing value |
The new temperature
Temperature function cte * t0
saa.tcte(t0, k)
t0 |
The current temperature |
k |
The annealing value |
The new temperature
Exponential temperature function.
saa.texp(t0, k)
t0 |
The current temperature |
k |
The annealing value |
The new temperature
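The three schedule names suggest the following plausible forms; the exact constants here are assumptions for illustration (the `0`-suffixed functions are sketches, not the package source).

```r
saa.tbyk0 <- function(t0, k) t0 / (1 + k)   # t/k-style hyperbolic decay
saa.tcte0 <- function(t0, k) 0.95 * t0      # cte * t0, e.g. cte = 0.95
saa.texp0 <- function(t0, k) t0 * exp(-k)   # exponential decay

saa.tbyk0(100, 4)   # 20
saa.tcte0(100, 4)   # 95
```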
A simple helper for plotting 3D scatterplots.
scatterplotlothelper(d, x, y, z, title = NULL)
d |
A data frame. |
x |
A string with the dataframe column name for x axis |
y |
A string with the dataframe column name for y axis |
z |
A string with the dataframe column name for z axis |
title |
The optional plot title. May be omitted. |
A scatter3D plot
Searches for a value in a matrix.
searchrow(ddata, value)
ddata |
The matrix containing the dataset |
value |
The value to search for |
Boolean; TRUE for those indexes matching the value
Generates a barplot comparing the number of evaluations for the algorithms ("saa", "pso", "acor", "ees1").
show.comp1(mydata, what, title = NULL)
mydata |
The data generated with 'summarize.comp1' |
what |
The name of variable to plot on 'y' axis |
title |
The plot title |
## Not run: p.a<- show.comp1(d.cigar4,"evals","(a) Cigar function") p.b<- show.comp1(d.schaffer4,"evals","(b) Schafer function") p.c<- show.comp1(d.griewank4,"evals","(c) Griewank function") p.d<- show.comp1(d.bohachevsky4,"evals","(d) Bohachevsky function") ## End(Not run)
A simple function to calculate the slope at the i-th element position.
slope(x, y, i)
x |
The x vector |
y |
The y vector |
i |
The position |
The slope
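A finite-difference sketch of the idea (hypothetical implementation, not the package source): use the central difference where possible, falling back to a one-sided difference at the series endpoints.

```r
# Slope of the discrete series (x, y) at position i.
slope0 <- function(x, y, i) {
  n <- length(x)
  a <- max(1, i - 1)   # left neighbor, or i itself at the boundary
  b <- min(n, i + 1)   # right neighbor, or i itself at the boundary
  (y[b] - y[a]) / (x[b] - x[a])
}

x <- 1:10
y <- 2 * x + 3
slope0(x, y, 5)   # 2
```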
Calculates all slopes for the discrete x,y series.
slopes(x, y)
x |
The x vector |
y |
The y vector |
A vector with all slopes
Sorts solutions by their respective fitness.
sortSolution(s, f)
s |
Problem solution |
f |
The function evaluation for s |
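One way a solution set could be assembled and ordered is sketched below, assuming a data-frame representation where each row is a solution; this is a hypothetical illustration, not the package source.

```r
# Bind solutions to their fitness values and sort ascending by fitness,
# so the best (lowest-fitness) solution comes first.
sortSolution0 <- function(s, f) {
  S <- cbind(as.data.frame(s), fitness = f)
  S[order(S$fitness), ]
}

s <- data.frame(x1 = c(3, 1, 2), x2 = c(30, 10, 20))
f <- c(9, 1, 4)
sortSolution0(s, f)   # rows ordered with fitness 1, 4, 9
```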
Provides a summary with averaged values of the experimental setup.
summarize.comp1(mydata)
mydata |
The data frame generated with 'compare.algorithms1' |
The summarized data
Creates neighbor solutions.
tabu.getNeighbors(tabu, parameters, solution, size)
tabu |
The tabu list |
parameters |
The parameter set |
solution |
The current solution |
size |
The neighborhood size |
The neighbors of the solution
Checks whether a solution is present in the tabu list.
tabu.istabu(tabulist, solution)
tabulist |
The matrix of tabu solutions |
solution |
The solution value to be checked |
Boolean; TRUE if the tabu list contains the solution
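The membership test amounts to checking whether any row of the tabu matrix equals the candidate solution; a minimal sketch (hypothetical implementation, not the package source):

```r
# TRUE when some row of 'tabulist' matches 'solution' element-wise.
tabu.istabu0 <- function(tabulist, solution) {
  any(apply(tabulist, 1, function(row) all(row == solution)))
}

tl <- rbind(c(1, 2), c(3, 4))
tabu.istabu0(tl, c(3, 4))   # TRUE
tabu.istabu0(tl, c(5, 6))   # FALSE
```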
Checks whether parameters are below the upper bounds.
upperBound(particles, factors)
particles |
The particle set |
factors |
the defined range for objective function parameters |
The particle set constrained to the valid upper bounds
Calculates the confidence interval of the mean for the provided data at the desired confidence level. This function uses a bootstrap resampling scheme for estimating the CI.
xmeanci1(x, alpha = 0.95)
x |
The data set for which CI will be calculated |
alpha |
The confidence level. The default value is 0.95 (95%) |
The confidence interval for the mean calculated using 'boot.ci'
Calculates the confidence interval of the mean for the provided data at the desired confidence level.
xmeanci2(x, alpha = 0.95)
x |
The data set for which CI will be calculated |
alpha |
The confidence level. The default value is 0.95 (95%) |
The confidence interval for the mean
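A classical t-based interval is one common way such a CI can be computed; the sketch below illustrates the formula mean(x) ± t * sd(x)/sqrt(n) and is an assumption about the method, not the package source.

```r
# Two-sided t-based confidence interval for the mean.
xmeanci0 <- function(x, alpha = 0.95) {
  n <- length(x)
  se <- sd(x) / sqrt(n)                        # standard error of the mean
  delta <- qt((1 + alpha) / 2, df = n - 1) * se
  c(lower = mean(x) - delta, upper = mean(x) + delta)
}

xmeanci0(c(4.8, 5.1, 5.0, 4.9, 5.2))   # interval around the mean of 5.0
```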
A simple helper for plotting x-y dispersion points.
xyplothelper(d, x, y, title = NULL)
d |
A data frame. |
x |
A string with the dataframe column name for x axis |
y |
A string with the dataframe column name for y axis |
title |
The optional plot title. May be omitted. |
A ggplot2 plot object