Title: | Gradient-Enhanced Kriging |
---|---|
Description: | Gradient-Enhanced Kriging as an emulator for computer experiments based on Maximum-Likelihood estimation. |
Authors: | Carmen van Meegen [aut, cre] |
Maintainer: | Carmen van Meegen <[email protected]> |
License: | GPL (>= 2) |
Version: | 1.1.0 |
Built: | 2025-03-14 14:18:12 UTC |
Source: | CRAN |
Rosenbrock's banana function is defined by
\[ f(x) = (1 - x_1)^2 + 100 \, (x_2 - x_1^2)^2 \]
with $x = (x_1, x_2)^\top$ for $x_1 \in [-2, 2]$ and $x_2 \in [-1, 3]$.
banana(x)
bananaGrad(x)
x |
a numeric vector of length 2 or a numeric matrix with 2 columns (one input point per row). |
The gradient of Rosenbrock's banana function is
\[ \nabla f(x) = \begin{pmatrix} -2 \, (1 - x_1) - 400 \, x_1 \, (x_2 - x_1^2) \\ 200 \, (x_2 - x_1^2) \end{pmatrix}. \]
Rosenbrock's banana function has one global minimum $f(x^\star) = 0$ at $x^\star = (1, 1)^\top$.
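The analytical gradient can be sanity-checked against a central finite-difference approximation. This is a minimal sketch assuming, as in the examples below, that banana and bananaGrad accept a single point given as a 1 x 2 matrix:

# Compare the analytical gradient with central finite differences
x0 <- matrix(c(-1.2, 1), nrow = 1)   # a single input point
h <- 1e-6
fd <- sapply(1:2, function(i) {
  e <- matrix(0, 1, 2); e[1, i] <- h
  (banana(x0 + e) - banana(x0 - e)) / (2 * h)
})
rbind(analytical = drop(bananaGrad(x0)), finite.differences = fd)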
banana returns the function value of Rosenbrock's banana function at x.
bananaGrad returns the gradient of Rosenbrock's banana function at x.
Carmen van Meegen
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
Rosenbrock, H. H. (1960). An Automatic Method for Finding the Greatest or Least Value of a Function. The Computer Journal, 3(3):175–184. doi:10.1093/comjnl/3.3.175.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
# Contour plot of Rosenbrock's banana function
n.grid <- 50
x1 <- seq(-2, 2, length.out = n.grid)
x2 <- seq(-1, 3, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) banana(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Rosenbrock's banana function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
blockChol calculates the block Cholesky decomposition of a partitioned matrix of the form
\[ A = \begin{pmatrix} K & R^\top \\ R & S \end{pmatrix}. \]
blockChol(K, R = NULL, S = NULL, tol = NULL)
K |
a real symmetric positive-definite square submatrix. |
R |
an (optional) rectangular submatrix. |
S |
an (optional) real symmetric positive-definite square submatrix. |
tol |
an (optional) numeric tolerance, see ‘Details’. |
To obtain the block Cholesky decomposition
\[ \begin{pmatrix} K & R^\top \\ R & S \end{pmatrix} = \begin{pmatrix} L^\top & 0 \\ Q^\top & M^\top \end{pmatrix} \begin{pmatrix} L & Q \\ 0 & M \end{pmatrix} \]
the following steps are performed:
1. Calculate $K = L^\top L$ with upper triangular matrix $L$.
2. Solve $L^\top Q = R^\top$ via forward substitution.
3. Compute the Schur complement $S - Q^\top Q$ of the block $K$ of the matrix $A$.
4. Determine $S - Q^\top Q = M^\top M$ with upper triangular matrix $M$.
The upper triangular matrices $L$ and $M$ in steps 1 and 4 are obtained by chol. Forward substitution in step 2 is carried out with backsolve and the option transpose = TRUE.
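In base R the four steps amount to the following sketch, assuming conformable blocks K, R and S (e.g. as returned by blockCor):

L <- chol(K)                               # step 1: K = t(L) %*% L
Q <- backsolve(L, t(R), transpose = TRUE)  # step 2: solve t(L) %*% Q = t(R)
M <- chol(S - crossprod(Q))                # steps 3 and 4: factor the Schur complement S - t(Q) %*% Q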
If tol is specified, a regularization of the form $K_\varepsilon = K + \varepsilon I$ is conducted. Here, tol is the upper bound for the logarithmic condition number of $K_\varepsilon$. Then $\varepsilon$ is chosen as the minimal "nugget" that is added to the diagonal of $K$ to ensure $\log(\kappa(K_\varepsilon)) \leq$ tol.
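The principle behind this choice can be sketched with the condition number from base R; the exact search for $\varepsilon$ inside blockChol may differ:

# Increase the "nugget" eps until the log condition number falls below tol
log.cond <- function(K) log(kappa(K, exact = TRUE))
eps <- 0
if (log.cond(K) > tol) {
  eps <- .Machine$double.eps
  while (log.cond(K + diag(eps, nrow(K))) > tol) eps <- 10 * eps
}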
Within gek this function is used to calculate the block Cholesky decomposition of the correlation matrix with derivatives: K is the Kriging correlation matrix, R is the matrix containing the first partial derivatives, and S consists of the second partial derivatives of the correlation matrix K.
blockChol returns a list with the following components:
L |
the upper triangular factor of the Cholesky decomposition of $K$, i.e. $K = L^\top L$. |
Q |
the solution of the triangular system $L^\top Q = R^\top$. |
M |
the upper triangular factor of the Cholesky decomposition of the Schur complement $S - Q^\top Q$. |
If R or S are not specified, Q and M are set to NULL, i.e. only the Cholesky decomposition of K is calculated.
The attribute "eps" gives the minimum "nugget" that is added to the diagonal. As in chol there is no check for symmetry.
Carmen van Meegen
Chen, J., Jin, Z., Shi, Q., Qiu, J., and Liu, W. (2013). Block Algorithm and Its Implementation for Cholesky Factorization.
Gustavson, F. G. and Jonsson, I. (2000). Minimal-storage high-performance Cholesky factorization via blocking and recursion. IBM Journal of Research and Development, 44(6):823–850. doi:10.1147/rd.446.0823.
Ranjan, P., Haynes, R. and Karsten, R. (2011). A Computationally Stable Approach to Gaussian Process Interpolation of Deterministic Computer Simulation Data. Technometrics, 53:366–378. doi:10.1198/TECH.2011.09141.
chol for the Cholesky decomposition.
backsolve for backward substitution.
blockCor for computing a correlation matrix with derivatives.
# Construct correlation matrix
x <- matrix(seq(0, 1, length = 5), ncol = 1)
res <- blockCor(x, theta = 1, covtype = "gaussian", derivatives = TRUE)
A <- cbind(rbind(res$K, res$R), rbind(t(res$R), res$S))

# Cholesky decomposition of correlation matrix without derivatives
cholK <- blockChol(res$K)
cholK
cholK$L == chol(res$K)

# Cholesky decomposition of correlation matrix with derivatives
cholA <- blockChol(res$K, res$R, res$S)
cholA <- cbind(rbind(cholA$L, matrix(0, ncol(cholA$Q), nrow(cholA$Q))),
               rbind(cholA$Q, cholA$M))
cholA
cholA == chol(A)

# Cholesky decomposition of correlation matrix with derivatives and regularization
res <- blockCor(x, theta = 2, covtype = "gaussian", derivatives = TRUE)
A <- cbind(rbind(res$K, res$R), rbind(t(res$R), res$S))
try(blockChol(res$K, res$R, res$S))
blockChol(res$K, res$R, res$S, tol = 35)
Calculation of a correlation matrix with or without derivatives according to the specification of a correlation structure.
blockCor(x, ...)

## Default S3 method:
blockCor(x, theta, covtype = c("matern5_2", "matern3_2", "gaussian"),
         derivatives = FALSE, ...)

## S3 method for class 'gekm'
blockCor(x, ...)
x |
a numeric matrix of design points (one row per point) or an object of class "gekm". |
theta |
a numeric vector of correlation parameters. |
covtype |
a character string specifying the correlation function; one of "matern5_2", "matern3_2" or "gaussian". |
derivatives |
logical. Should the first and second partial derivatives of the correlation matrix be computed? |
... |
further arguments passed to or from other methods. |
The correlation matrix with derivatives is defined as a block matrix of the form
\[ A = \begin{pmatrix} K & R^\top \\ R & S \end{pmatrix}. \]
Three correlation functions are currently implemented in blockCor:
Matérn 5/2 kernel with covtype = "matern5_2":
\[ \phi(h; \theta) = \left( 1 + \frac{\sqrt{5} \, |h|}{\theta} + \frac{5 h^2}{3 \theta^2} \right) \exp\!\left( -\frac{\sqrt{5} \, |h|}{\theta} \right) \]
Matérn 3/2 kernel with covtype = "matern3_2":
\[ \phi(h; \theta) = \left( 1 + \frac{\sqrt{3} \, |h|}{\theta} \right) \exp\!\left( -\frac{\sqrt{3} \, |h|}{\theta} \right) \]
Gaussian kernel with covtype = "gaussian":
\[ \phi(h; \theta) = \exp\!\left( -\frac{h^2}{2 \theta^2} \right) \]
where $h = x - x'$ denotes the difference between two points $x$ and $x'$.
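Assuming the parameterizations above, the convention is easy to verify numerically, here for the Gaussian kernel at distance h = 0.3 with theta = 0.25:

# Check the Gaussian kernel convention exp(-h^2 / (2 * theta^2))
x <- matrix(c(0, 0.3), ncol = 1)
theta <- 0.25
K <- blockCor(x, theta = theta, covtype = "gaussian")$K
all.equal(K[1, 2], exp(-0.3^2 / (2 * theta^2)))   # TRUE under the form above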
blockCor returns a list with the following components:
K |
The correlation matrix without derivatives. |
R |
If derivatives = TRUE, the matrix of first partial derivatives of the correlation function; otherwise NULL. |
S |
If derivatives = TRUE, the matrix of second partial derivatives of the correlation function; otherwise NULL. |
The components of the list can be combined in the following form to construct the complete correlation matrix with derivatives: cbind(rbind(K, R), rbind(t(R), S)).
Carmen van Meegen
Koehler, J. and Owen, A. (1996). Computer Experiments. In Ghosh, S. and Rao, C. (eds.), Design and Analysis of Experiments, volume 13 of Handbook of Statistics, pp. 261–308. Elsevier Science. doi:10.1016/S0169-7161(96)13011-X.
Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. The MIT Press. https://gaussianprocess.org/gpml/.
Stein, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer Series in Statistics. Springer-Verlag. doi:10.1007/978-1-4612-1494-6.
blockChol for the block Cholesky decomposition.
tangents for drawing tangent lines.
# Some examples of correlation matrices:
x <- matrix(1:4, ncol = 1)
blockCor(x, theta = 1)
blockCor(x, theta = 1, covtype = "gaussian")
blockCor(x, theta = 1, covtype = "gaussian", derivatives = TRUE)
blockCor(x, theta = 1, covtype = "matern3_2", derivatives = TRUE)
res <- blockCor(x, theta = 2, covtype = "gaussian", derivatives = TRUE)
cbind(rbind(res$K, res$R), rbind(t(res$R), res$S))

# Illustration of correlation functions and their derivatives:
x <- seq(0, 1, length = 100)
X <- matrix(x, ncol = 1)
gaussian <- blockCor(X, theta = 0.25, covtype = "gaussian", derivatives = TRUE)
matern5_2 <- blockCor(X, theta = 0.25, covtype = "matern5_2", derivatives = TRUE)
matern3_2 <- blockCor(X, theta = 0.25, covtype = "matern3_2", derivatives = TRUE)

# Correlation functions and first partial derivatives:
index <- c(10, 20, 40, 80)
par(mar = c(5.1, 5.1, 4.1, 2.1))
# Matern 3/2
plot(x, matern3_2$K[1, ], type = "l",
     xlab = expression(group("|", x - x*minute, "|")),
     ylab = expression(phi(x, x*minute, theta == 0.25)), lwd = 2)
tangents(x[index], matern3_2$K[index, 1], matern3_2$R[index, 1],
         length = 0.15, lwd = 2, col = 2)
points(x[index], matern3_2$K[index, 1], pch = 16)
# Matern 5/2
lines(x, matern5_2$K[1, ], lwd = 2, col = 3)
tangents(x[index], matern5_2$K[index, 1], matern5_2$R[index, 1],
         length = 0.15, lwd = 2, col = 2)
points(x[index], matern5_2$K[index, 1], pch = 16)
# Gaussian
lines(x, gaussian$K[1, ], lwd = 2, col = 4)
tangents(x[index], gaussian$K[index, 1], gaussian$R[index, 1],
         length = 0.15, lwd = 2, col = 2)
points(x[index], gaussian$K[index, 1], pch = 16)
legend("topright", lty = 1, lwd = 2, col = c(1, 3, 4), bty = "n",
       legend = c("Matern 3/2", "Matern 5/2", "Gaussian"))

# First and second partial derivatives of correlation functions:
index <- c(5, 10, 20, 40, 80)
par(mar = c(5.1, 6.1, 4.1, 2.1))
# Matern 3/2
plot(x, matern3_2$R[1, ], type = "l",
     xlab = expression(group("|", x - x*minute, "|")),
     ylab = expression(frac(partialdiff * phi(x, x*minute, theta == 0.25),
                            partialdiff * x * minute)), lwd = 2)
tangents(x[index], matern3_2$R[1, index], matern3_2$S[index, 1],
         length = 0.15, lwd = 2, col = 2)
points(x[index], matern3_2$R[1, index], pch = 16)
# Matern 5/2
lines(x, matern5_2$R[1, ], lwd = 2, col = 3)
tangents(x[index], matern5_2$R[1, index], matern5_2$S[index, 1],
         length = 0.15, lwd = 2, col = 2)
points(x[index], matern5_2$R[1, index], pch = 16)
# Gaussian
lines(x, gaussian$R[1, ], lwd = 2, col = 4)
tangents(x[index], gaussian$R[1, index], gaussian$S[index, 1],
         length = 0.15, lwd = 2, col = 2)
points(x[index], gaussian$R[1, index], pch = 16)
legend("topright", lty = 1, lwd = 2, col = c(1, 3, 4), bty = "n",
       legend = c("Matern 3/2", "Matern 5/2", "Gaussian"))
The borehole function is defined by
\[ f(x) = \frac{2 \pi T_u (H_u - H_l)}{\ln(r / r_w) \left( 1 + \frac{2 L T_u}{\ln(r / r_w) \, r_w^2 \, K_w} + \frac{T_u}{T_l} \right)} \]
with $x = (r_w, r, T_u, H_u, T_l, H_l, L, K_w)^\top$.
borehole(x)
boreholeGrad(x)
x |
a numeric vector of length 8 or a numeric matrix with 8 columns (one input point per row). |
The borehole function calculates the water flow rate through a borehole.
Input | Domain | Distribution | Description |
r_w | [0.05, 0.15] | N(0.1, 0.0161812) | radius of borehole in m |
r | [100, 50000] | LN(7.71, 1.0056) | radius of influence in m |
T_u | [63070, 115600] | U(63070, 115600) | transmissivity of upper aquifer in m^2/yr |
H_u | [990, 1110] | U(990, 1110) | potentiometric head of upper aquifer in m |
T_l | [63.1, 116] | U(63.1, 116) | transmissivity of lower aquifer in m^2/yr |
H_l | [700, 820] | U(700, 820) | potentiometric head of lower aquifer in m |
L | [1120, 1680] | U(1120, 1680) | length of borehole in m |
K_w | [1500, 15000] | U(1500, 15000) | hydraulic conductivity of borehole in m/yr |
Note, $N(\mu, \sigma)$ represents the normal distribution with expected value $\mu$ and standard deviation $\sigma$, and $LN(\mu, \sigma)$ is the log-normal distribution with mean $\mu$ and standard deviation $\sigma$ of the logarithm. Further, $U(a, b)$ denotes the continuous uniform distribution over the interval $[a, b]$.
borehole returns the function value of the borehole function at x.
boreholeGrad returns the gradient of the borehole function at x.
Carmen van Meegen
Harper, W. V. and Gupta, S. K. (1983). Sensitivity/Uncertainty Analysis of a Borehole Scenario Comparing Latin Hypercube Sampling and Deterministic Sensitivity Approaches. BMI/ONWI-516, Office of Nuclear Waste Isolation, Battelle Memorial Institute, Columbus, OH.
Morris, M., Mitchell, T., and Ylvisaker, D. (1993). Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction. Technometrics, 35(3):243–255. doi:10.1080/00401706.1993.10485320.
gekm for another example.
# List of inputs with their distributions and their respective ranges
inputs <- list("r_w" = list(dist = "norm", mean = 0.1, sd = 0.0161812,
                            min = 0.05, max = 0.15),
               "r" = list(dist = "lnorm", meanlog = 7.71, sdlog = 1.0056,
                          min = 100, max = 50000),
               "T_u" = list(dist = "unif", min = 63070, max = 115600),
               "H_u" = list(dist = "unif", min = 990, max = 1110),
               "T_l" = list(dist = "unif", min = 63.1, max = 116),
               "H_l" = list(dist = "unif", min = 700, max = 820),
               "L" = list(dist = "unif", min = 1120, max = 1680),
               # for a more nonlinear, nonadditive function, see Morris et al. (1993)
               "K_w" = list(dist = "unif", min = 1500, max = 15000))

# Function for Monte Carlo simulation
samples <- function(x, N = 10^5){
  switch(x$dist,
         "norm" = rnorm(N, x$mean, x$sd),
         "lnorm" = rlnorm(N, x$meanlog, x$sdlog),
         "unif" = runif(N, x$min, x$max))
}

# Uncertainty distribution of the water flow rate
set.seed(1)
X <- sapply(inputs, samples)
y <- borehole(X)
hist(y, breaks = 50, freq = FALSE, main = "",
     xlab = expression(paste("Water flow rate ", group("[", m^3/yr, "]"))))
The Branin-Hoo function is defined by
\[ f(x) = \left( x_2 - \frac{5.1}{4 \pi^2} \, x_1^2 + \frac{5}{\pi} \, x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8 \pi} \right) \cos(x_1) + 10 \]
with $x_1 \in [-5, 10]$ and $x_2 \in [0, 15]$.
branin(x)
braninGrad(x)
x |
a numeric vector of length 2 or a numeric matrix with 2 columns (one input point per row). |
The gradient of the Branin-Hoo function is
\[ \nabla f(x) = \begin{pmatrix} 2 \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right) \left( \frac{5}{\pi} - \frac{5.1}{2 \pi^2} x_1 \right) - 10 \left( 1 - \frac{1}{8 \pi} \right) \sin(x_1) \\ 2 \left( x_2 - \frac{5.1}{4 \pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right) \end{pmatrix}. \]
The Branin-Hoo function has three global minima $f(x^\star) \approx 0.397887$ at $x^\star = (-\pi, 12.275)^\top$, $x^\star = (\pi, 2.275)^\top$ and $x^\star = (9.42478, 2.475)^\top$.
branin returns the function value of the Branin-Hoo function at x.
braninGrad returns the gradient of the Branin-Hoo function at x.
Carmen van Meegen
Branin, Jr., F. H. (1972). Widely Convergent Method of Finding Multiple Solutions of Simultaneous Nonlinear Equations. IBM Journal of Research and Development, 16(5):504–522.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
# Contour plot of Branin-Hoo function
n.grid <- 50
x1 <- seq(-5, 10, length.out = n.grid)
x2 <- seq(0, 15, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) branin(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Branin-Hoo function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
The three-hump camel function is defined by
\[ f(x) = 2 x_1^2 - 1.05 \, x_1^4 + \frac{x_1^6}{6} + x_1 x_2 + x_2^2 \]
with $x = (x_1, x_2)^\top \in [-5, 5]^2$.
camel3(x)
camel3Grad(x)
x |
a numeric vector of length 2 or a numeric matrix with 2 columns (one input point per row). |
The gradient of the three-hump camel function is
\[ \nabla f(x) = \begin{pmatrix} 4 x_1 - 4.2 \, x_1^3 + x_1^5 + x_2 \\ x_1 + 2 x_2 \end{pmatrix}. \]
The three-hump camel function has one global minimum $f(x^\star) = 0$ at $x^\star = (0, 0)^\top$.
camel3 returns the function value of the three-hump camel function at x.
camel3Grad returns the gradient of the three-hump camel function at x.
Carmen van Meegen
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
# Contour plot of three-hump camel function
n.grid <- 50
x1 <- x2 <- seq(-2, 2, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) camel3(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of three-hump camel function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
The six-hump camel function is defined by
\[ f(x) = \left( 4 - 2.1 \, x_1^2 + \frac{x_1^4}{3} \right) x_1^2 + x_1 x_2 + \left( -4 + 4 x_2^2 \right) x_2^2 \]
with $x_1 \in [-3, 3]$ and $x_2 \in [-2, 2]$.
camel6(x)
camel6Grad(x)
x |
a numeric vector of length 2 or a numeric matrix with 2 columns (one input point per row). |
The gradient of the six-hump camel function is
\[ \nabla f(x) = \begin{pmatrix} 8 x_1 - 8.4 \, x_1^3 + 2 x_1^5 + x_2 \\ x_1 - 8 x_2 + 16 x_2^3 \end{pmatrix}. \]
The six-hump camel function has two global minima $f(x^\star) \approx -1.0316$ at $x^\star = (0.0898, -0.7126)^\top$ and $x^\star = (-0.0898, 0.7126)^\top$.
camel6 returns the function value of the six-hump camel function at x.
camel6Grad returns the gradient of the six-hump camel function at x.
Carmen van Meegen
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
# Contour plot of six-hump camel function
n.grid <- 50
x1 <- seq(-2, 2, length.out = n.grid)
x2 <- seq(-1, 1, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) camel6(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of six-hump camel function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
The bent cigar function is defined by
\[ f(x) = x_1^2 + 10^6 \sum_{i = 2}^{d} x_i^2 \]
with $x_i \in [-100, 100]$ for $i = 1, \ldots, d$.
cigar(x)
cigarGrad(x)
x |
a numeric vector of length d or a numeric matrix with d columns (one input point per row). |
The gradient of the bent cigar function is
\[ \nabla f(x) = \left( 2 x_1, \, 2 \cdot 10^6 \, x_2, \ldots, 2 \cdot 10^6 \, x_d \right)^\top. \]
The bent cigar function has one global minimum $f(x^\star) = 0$ at $x^\star = (0, \ldots, 0)^\top$.
cigar returns the function value of the bent cigar function at x.
cigarGrad returns the gradient of the bent cigar function at x.
Carmen van Meegen
Plevris, V. and Solorzano, G. (2022). A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking. Data, 7(4):46. doi:10.3390/data7040046.
tangents for drawing tangent lines.
# 1-dimensional Cigar function with tangents
curve(cigar(x), from = -5, to = 5, n = 200)
x <- seq(-4.5, 4.5, length = 5)
y <- cigar(x)
dy <- cigarGrad(x)
tangents(x, y, dy, length = 2, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of Cigar function
n.grid <- 50
x1 <- x2 <- seq(-100, 100, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) cigar(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Cigar function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
Determine the derivatives of a model matrix.
derivModelMatrix(object, ...)

## Default S3 method:
derivModelMatrix(object, data, ...)

## S3 method for class 'gekm'
derivModelMatrix(object, ...)

## S3 method for class 'gekm'
model.matrix(object, ...)
object |
an object of an appropriate class; for the default method a formula describing the model. |
data |
a data.frame containing the variables in the model. |
... |
further arguments, currently not used. |
derivModelMatrix makes use of the function deriv. Accordingly, the calculation of derivatives is only possible for functions that are contained in the derivatives table of deriv. Note, in contrast to model.matrix, factors are not supported.
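Whether a term is admissible can be checked by calling deriv directly; functions outside its derivatives table raise an error:

deriv(~ sin(x1) + x1^2, "x1")        # supported: returns an expression that also computes the gradient
try(deriv(~ besselJ(x1, 0), "x1"))   # not in the derivatives table: error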
The derivatives of the model (or design) matrix.
As in model.matrix there is an attribute "assign".
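For example, the attribute can be inspected directly:

dm <- derivModelMatrix(~ sin(x1) + I(x1^2), data.frame(x1 = 1:3))
attr(dm, "assign")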
Carmen van Meegen
deriv for more details on supported arithmetic operators and functions.
model.matrix for construction of a design (or model) matrix.
## Several examples for the derivatives of a model matrix
dat <- data.frame(x1 = seq(-2, 2, length.out = 5))

model.matrix(~ 1, dat)
derivModelMatrix(~ 1, dat)

model.matrix(~ ., dat)
derivModelMatrix(~ ., dat)

model.matrix(~ . - 1, dat)
derivModelMatrix(~ . - 1, dat)

model.matrix(~ sin(x1) + I(x1^2), dat)
derivModelMatrix(~ sin(x1) + I(x1^2), dat)

dat <- cbind(dat, x2 = seq(1, 5, length.out = 5))

model.matrix(~ 1, dat)
derivModelMatrix(~ 1, dat)

model.matrix(~ .^2, dat)
derivModelMatrix(~ .^2, dat)

model.matrix(~ log(x2), dat)
derivModelMatrix(~ log(x2), dat)

model.matrix(~ x1:x2, dat)
derivModelMatrix(~ x1:x2, dat)

model.matrix(~ I(x1^2) * I(x2^3), dat)
derivModelMatrix(~ I(x1^2) * I(x2^3), dat)

model.matrix(~ sin(x1) + cos(x2) + atan(x1 * x2), dat)
derivModelMatrix(~ sin(x1) + cos(x2) + atan(x1 * x2), dat)
Estimation of a Kriging model with or without derivatives.
gekm(formula, data, deriv, covtype = c("matern5_2", "matern3_2", "gaussian"),
     theta = NULL, tol = NULL, optimizer = c("NMKB", "L-BFGS-B"),
     lower = NULL, upper = NULL, start = NULL, ncalls = 20, control = NULL,
     model = TRUE, x = FALSE, y = FALSE, dx = FALSE, dy = FALSE, ...)

## S3 method for class 'gekm'
print(x, digits = 4L, ...)
formula |
a formula specifying the trend of the Kriging model. |
data |
a data.frame containing the design points and the response. |
deriv |
an optional data.frame containing the derivatives of the response with respect to the input variables; if supplied, a gradient-enhanced Kriging model is fitted. |
covtype |
a character string specifying the correlation function; one of "matern5_2", "matern3_2" or "gaussian". |
theta |
a numeric vector of fixed correlation parameters; if NULL (the default), the correlation parameters are estimated. |
tol |
a tolerance for the condition number of the correlation matrix, see blockChol. |
optimizer |
an optional character string specifying the optimization algorithm, either "NMKB" or "L-BFGS-B". |
lower |
an optional lower bound for the optimization of the correlation parameters. |
upper |
an optional upper bound for the optimization of the correlation parameters. |
start |
an optional numeric vector of starting values for the optimization of the correlation parameters. |
ncalls |
an optional integer giving the number of randomly chosen starting values for the optimization. |
control |
a list of control parameters passed to the optimizer. |
model |
logical. Should the model frame be returned? Defaults to TRUE. |
x |
logical. Should the model matrix be returned? For the print method, an object of class "gekm". |
y |
logical. Should the response vector be returned? |
dx |
logical. Should the derivatives of the model matrix be returned? |
dy |
logical. Should the vector of derivatives of the response be returned? |
digits |
number of digits to be used by the print method. |
... |
further arguments, currently not used. |
Parameter estimation is performed via maximum likelihood.
The optimizer argument can be used to select one of the optimization algorithms "NMKB" or "L-BFGS-B". In the case of "L-BFGS-B", analytical gradients of the "concentrated" log likelihood are used. For one-dimensional problems, optimize is called and the algorithm selected via optimizer is ignored.
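Here "concentrated" means that the trend coefficients and the process variance are profiled out analytically, so that only the correlation parameters remain for the numerical optimizer. The following sketch of this criterion for a model without derivatives follows standard Kriging theory; it is an illustration, not the internal implementation of gekm:

# Negative "concentrated" log-likelihood in theta (up to an additive constant);
# x: design points, Fmat: trend matrix, y: response
negLogLik <- function(theta, x, Fmat, y, covtype = "matern5_2") {
  n <- length(y)
  L <- chol(blockCor(x, theta = theta, covtype = covtype)$K)
  Ki <- chol2inv(L)                                             # inverse correlation matrix
  beta <- solve(t(Fmat) %*% Ki %*% Fmat, t(Fmat) %*% Ki %*% y)  # generalized least squares trend
  r <- y - Fmat %*% beta
  sigma2 <- drop(t(r) %*% Ki %*% r) / n                         # profiled process variance
  n / 2 * log(sigma2) + sum(log(diag(L)))                       # + (1/2) log-determinant term
}
# For a one-dimensional theta: optimize(negLogLik, c(0.01, 10), x = x, Fmat = Fmat, y = y)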
gekm returns an object of class "gekm" whose underlying structure is a list containing at least the following components:
coefficients |
the estimated regression coefficients. |
sigma |
the estimated process standard deviation. |
theta |
the (estimated) correlation parameters. |
covtype |
the name of the correlation function. |
chol |
(the components of) the upper triangular matrix of the Cholesky decomposition of the correlation matrix. |
optimizer |
the optimization algorithm. |
convergence |
the convergence code. |
message |
information from the optimizer. |
logLik |
the value of the negative “concentrated” log likelihood at the estimated parameters. |
derivatives |
logical. TRUE if a gradient-enhanced Kriging model was fitted, otherwise FALSE. |
data |
the data.frame that was supplied. |
deriv |
if supplied, the data.frame of derivatives. |
call |
the matched call. |
terms |
the terms object used. |
model |
if requested (the default), the model frame used. |
x |
if requested, the model matrix. |
y |
if requested, the response vector. |
dx |
if requested, the derivatives of the model matrix. |
dy |
if requested, the vector of derivatives of the response. |
Carmen van Meegen
Cressie, N. A. C. (1993). Statistics for Spatial Data. John Wiley & Sons. doi:10.1002/9781119115151.
Koehler, J. and Owen, A. (1996). Computer Experiments. In Ghosh, S. and Rao, C. (eds.), Design and Analysis of Experiments, volume 13 of Handbook of Statistics, pp. 261–308. Elsevier Science. doi:10.1016/S0169-7161(96)13011-X.
Krige, D. G. (1951). A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand. Journal of the Southern African Institute of Mining and Metallurgy, 52(6):119–139.
Laurent, L., Le Riche, R., Soulier, B., and Boucard, P.-A. (2019). An Overview of Gradient-Enhanced Metamodels with Applications. Archives of Computational Methods in Engineering, 26(1):61–106. doi:10.1007/s11831-017-9226-3.
Martin, J. D. and Simpson, T. W. (2005). Use of Kriging Models to Approximate Deterministic Computer Models. AIAA Journal, 43(4):853–863. doi:10.2514/1.8650.
Morris, M., Mitchell, T., and Ylvisaker, D. (1993). Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction. Technometrics, 35(3):243–255. doi:10.1080/00401706.1993.10485320.
Oakley, J. and O'Hagan, A. (2002). Bayesian Inference for the Uncertainty Distribution of Computer Model Outputs. Biometrika, 89(4):769–784. doi:10.1093/biomet/89.4.769.
O'Hagan, A., Kennedy, M. C., and Oakley, J. E. (1999). Uncertainty Analysis and Other Inference Tools for Complex Computer Codes. In Bayesian Statistics 6, Ed. J. M. Bernardo, J. O. Berger, A. P. Dawid and A .F. M. Smith, 503–524. Oxford University Press.
O'Hagan, A. (2006). Bayesian Analysis of Computer Code Outputs: A Tutorial. Reliability Engineering & System Safety, 91(10):1290–1300. doi:10.1016/j.ress.2005.11.025.
Park, J.-S. and Baek, J. (2001). Efficient Computation of Maximum Likelihood Estimators in a Spatial Linear Model with Power Exponential Covariogram. Computers & Geosciences, 27(1):1–7. doi:10.1016/S0098-3004(00)00016-9.
Ranjan, P., Haynes, R. and Karsten, R. (2011). A Computationally Stable Approach to Gaussian Process Interpolation of Deterministic Computer Simulation Data. Technometrics, 53:366–378. doi:10.1198/TECH.2011.09141.
Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. The MIT Press. https://gaussianprocess.org/gpml/.
Ripley, B. D. (1981). Spatial Statistics. John Wiley & Sons. doi:10.1002/0471725218.
Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). Design and Analysis of Computer Experiments. Statistical Science, 4(4):409–423. doi:10.1214/ss/1177012413.
Santner, T. J., Williams, B. J., and Notz, W. I. (2018). The Design and Analysis of Computer Experiments. 2nd edition. Springer-Verlag.
Stein, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer Series in Statistics. Springer-Verlag. doi:10.1007/978-1-4612-1494-6.
predict.gekm for prediction at new data points based on a model of class "gekm".
simulate.gekm for simulation of process paths conditional on a model of class "gekm".
## 1-dimensional example: Oakley and O’Hagan (2002)

# Define test function and its gradient
f <- function(x) 5 + x + cos(x)
fGrad <- function(x) 1 - sin(x)

# Generate coordinates and calculate slopes
x <- seq(-5, 5, length = 5)
y <- f(x)
dy <- fGrad(x)
dat <- data.frame(x, y)
deri <- data.frame(x = dy)

# Fit Kriging model
km.1d <- gekm(y ~ x, data = dat, covtype = "gaussian", theta = 1)
km.1d

# Fit gradient-enhanced Kriging model
gekm.1d <- gekm(y ~ x, data = dat, deriv = deri, covtype = "gaussian", theta = 1)
gekm.1d

## 2-dimensional example: Morris et al. (1993)

# List of inputs with their distributions and their respective ranges
inputs <- list("r_w" = list(dist = "norm", mean = 0.1, sd = 0.0161812,
                            min = 0.05, max = 0.15),
               "r" = list(dist = "lnorm", meanlog = 7.71, sdlog = 1.0056,
                          min = 100, max = 50000),
               "T_u" = list(dist = "unif", min = 63070, max = 115600),
               "H_u" = list(dist = "unif", min = 990, max = 1110),
               "T_l" = list(dist = "unif", min = 63.1, max = 116),
               "H_l" = list(dist = "unif", min = 700, max = 820),
               "L" = list(dist = "unif", min = 1120, max = 1680),
               # for a more nonlinear, nonadditive function, see Morris et al. (1993)
               "K_w" = list(dist = "unif", min = 1500, max = 15000))

# Generate design
design <- data.frame("r_w" = c(0, 0.268, 1), "r" = rep(0, 3), "T_u" = rep(0, 3),
                     "H_u" = rep(0, 3), "T_l" = rep(0, 3), "H_l" = rep(0, 3),
                     "L" = rep(0, 3), "K_w" = c(0, 1, 0.268))

# Function to transform design onto input range
transform <- function(x, data){
  for(p in names(data)){
    data[ , p] <- (x[[p]]$max - x[[p]]$min) * data[ , p] + x[[p]]$min
  }
  data
}

# Function to transform derivatives
deriv.transform <- function(x, data){
  for(p in colnames(data)){
    data[ , p] <- data[ , p] * (x[[p]]$max - x[[p]]$min)
  }
  data
}

# Generate outcome and derivatives
design.trans <- transform(inputs, design)
design$y <- borehole(design.trans)
deri.trans <- boreholeGrad(design.trans)
deri <- data.frame(deriv.transform(inputs, deri.trans))

# Design and data
cbind(design[ , c("r_w", "K_w", "y")], deri[ , c("r_w", "K_w")])

# Fit gradient-enhanced Kriging model with Gaussian correlation function
mod <- gekm(y ~ 1, data = design[ , c("r_w", "K_w", "y")],
            deriv = deri[ , c("r_w", "K_w")], covtype = "gaussian")
mod

## Compare results with Morris et al. (1993):

# Estimated hyperparameters, in Morris et al. (1993): 0.429 and 0.467
1 / (2 * mod$theta^2)

# Estimated intercept, in Morris et al. (1993): 69.15
coef(mod)

# Estimated standard deviation, in Morris et al. (1993): 135.47
mod$sigma

# Posterior mean and standard deviation at (0.5, 0.5),
# in Morris et al. (1993): 69.4 and 2.7
predict(mod, data.frame("r_w" = 0.5, "K_w" = 0.5))

# Posterior mean and standard deviation at (1, 1),
# in Morris et al. (1993): 230.0 and 19.2
predict(mod, data.frame("r_w" = 1, "K_w" = 1))

## Graphical comparison:

# Generate a 21 x 21 grid for prediction
n_grid <- 21
x <- seq(0, 1, length.out = n_grid)
grid <- expand.grid("r_w" = x, "K_w" = x)
pred <- predict(mod, grid)

# Compute ground truth on (transformed) grid
newdata <- data.frame("r_w" = grid[ , "r_w"], "r" = 0, "T_u" = 0, "H_u" = 0,
                      "T_l" = 0, "H_l" = 0, "L" = 0, "K_w" = grid[ , "K_w"])
newdata <- transform(inputs, newdata)
truth <- borehole(newdata)

# Contour plots of predicted and actual output
par(mfrow = c(1, 2), oma = c(3.5, 3.5, 0, 0.2), mar = c(0, 0, 1.5, 0))
contour(x, x, matrix(pred$fit, nrow = n_grid, ncol = n_grid, byrow = TRUE),
        levels = c(seq(10, 50, 10), seq(100, 250, 50)), main = "Predicted output")
points(design[ , c("K_w", "r_w")], pch = 16)
contour(x, x, matrix(truth, nrow = n_grid, ncol = n_grid, byrow = TRUE),
        levels = c(seq(10, 50, 10), seq(100, 250, 50)), yaxt = "n",
        main = "Ground truth")
points(design[ , c("K_w", "r_w")], pch = 16)
mtext(side = 1, outer = TRUE, line = 2.5,
      "Normalized hydraulic conductivity of borehole")
mtext(side = 2, outer = TRUE, line = 2.5, "Normalized radius of borehole")
The Griewank function is defined by
\[ f(x) = \sum_{i = 1}^{d} \frac{x_i^2}{4000} - \prod_{i = 1}^{d} \cos\!\left( \frac{x_i}{\sqrt{i}} \right) + 1 \]
with $x_i \in [-600, 600]$ for $i = 1, \ldots, d$.
griewank(x)
griewankGrad(x)
x |
a numeric vector of length d or a numeric matrix with d columns (one input point per row). |
The gradient of the Griewank function is
\[ \nabla f(x) = \left( \frac{x_1}{2000} + \frac{1}{\sqrt{1}} \sin\!\left( \frac{x_1}{\sqrt{1}} \right) \prod_{i \neq 1} \cos\!\left( \frac{x_i}{\sqrt{i}} \right), \ldots, \frac{x_d}{2000} + \frac{1}{\sqrt{d}} \sin\!\left( \frac{x_d}{\sqrt{d}} \right) \prod_{i \neq d} \cos\!\left( \frac{x_i}{\sqrt{i}} \right) \right)^\top. \]
The Griewank function has one global minimum $f(x^\star) = 0$ at $x^\star = (0, \ldots, 0)^\top$.
griewank returns the function value of the Griewank function at x.
griewankGrad returns the gradient of the Griewank function at x.
Carmen van Meegen
Plevris, V. and Solorzano, G. (2022). A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking. Data, 7(4):46. doi:10.3390/data7040046.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
tangents for drawing tangent lines.
# 1-dimensional Griewank function with tangents
curve(griewank(x), from = -5, to = 5, n = 200)
x <- seq(-5, 5, length = 5)
y <- griewank(x)
dy <- griewankGrad(x)
tangents(x, y, dy, length = 2, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of Griewank function
n.grid <- 50
x1 <- x2 <- seq(-5, 5, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) griewank(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Griewank function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
Himmelblau's function is defined by
\[ f(x) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2 \]
with $x = (x_1, x_2)^\top \in [-5, 5]^2$.
himmelblau(x)
himmelblauGrad(x)
x |
a numeric vector of length 2 or a numeric matrix with 2 columns (one input point per row). |
The gradient of Himmelblau's function is
\[ \nabla f(x) = \begin{pmatrix} 4 x_1 (x_1^2 + x_2 - 11) + 2 (x_1 + x_2^2 - 7) \\ 2 (x_1^2 + x_2 - 11) + 4 x_2 (x_1 + x_2^2 - 7) \end{pmatrix}. \]
Himmelblau's function has four global minima $f(x^\star) = 0$ at $x^\star = (3, 2)^\top$, $x^\star = (-2.805118, 3.131312)^\top$, $x^\star = (-3.779310, -3.283186)^\top$ and $x^\star = (3.584428, -1.848126)^\top$.
himmelblau returns the function value of Himmelblau's function at x.
himmelblauGrad returns the gradient of Himmelblau's function at x.
Carmen van Meegen
Himmelblau, D. (1972). Applied Nonlinear Programming. McGraw-Hill. ISBN 0-07-028921-2.
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
# Contour plot of Himmelblau's function
n.grid <- 50
x1 <- x2 <- seq(-5, 5, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) himmelblau(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Himmelblau's function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
                              "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid]) / 4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
      col = colors[y.facet.range])
Predicted values and standard deviations based on a gekm model.
## S3 method for class 'gekm'
predict(object, newdata, ...)
object |
an object of class "gekm". |
newdata |
a data.frame containing the points at which predictions are to be computed. |
... |
further arguments, currently not used. |
fit |
the predicted mean computed at newdata. |
sd.fit |
the predicted standard deviation of the predicted mean. |
Carmen van Meegen
Cressie, N. A. C. (1993). Statistics for Spatial Data. John Wiley & Sons. doi:10.1002/9781119115151.
Koehler, J. and Owen, A. (1996). Computer Experiments. In Ghosh, S. and Rao, C. (eds.), Design and Analysis of Experiments, volume 13 of Handbook of Statistics, pp. 261–308. Elsevier Science. doi:10.1016/S0169-7161(96)13011-X.
Krige, D. G. (1951). A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand. Journal of the Southern African Institute of Mining and Metallurgy, 52(6):119–139.
Laurent, L., Le Riche, R., Soulier, B., and Boucard, P.-A. (2019). An Overview of Gradient-Enhanced Metamodels with Applications. Archives of Computational Methods in Engineering, 26(1):61–106. doi:10.1007/s11831-017-9226-3.
Martin, J. D. and Simpson, T. W. (2005). Use of Kriging Models to Approximate Deterministic Computer Models. AIAA Journal, 43(4):853–863. doi:10.2514/1.8650.
Morris, M., Mitchell, T., and Ylvisaker, D. (1993). Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction. Technometrics, 35(3):243–255. doi:10.1080/00401706.1993.10485320.
Oakley, J. and O'Hagan, A. (2002). Bayesian Inference for the Uncertainty Distribution of Computer Model Outputs. Biometrika, 89(4):769–784. doi:10.1093/biomet/89.4.769.
O'Hagan, A., Kennedy, M. C., and Oakley, J. E. (1999). Uncertainty Analysis and Other Inference Tools for Complex Computer Codes. In Bayesian Statistics 6, Ed. J. M. Bernardo, J. O. Berger, A. P. Dawid and A .F. M. Smith, 503–524. Oxford University Press.
O'Hagan, A. (2006). Bayesian Analysis of Computer Code Outputs: A Tutorial. Reliability Engineering & System Safety, 91(10):1290–1300. doi:10.1016/j.ress.2005.11.025.
Park, J.-S. and Baek, J. (2001). Efficient Computation of Maximum Likelihood Estimators in a Spatial Linear Model with Power Exponential Covariogram. Computers & Geosciences, 27(1):1–7. doi:10.1016/S0098-3004(00)00016-9.
Ranjan, P., Haynes, R. and Karsten, R. (2011). A Computationally Stable Approach to Gaussian Process Interpolation of Deterministic Computer Simulation Data. Technometrics, 53:366–378. doi:10.1198/TECH.2011.09141.
Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. The MIT Press. https://gaussianprocess.org/gpml/.
Ripley, B. D. (1981). Spatial Statistics. John Wiley & Sons. doi:10.1002/0471725218.
Sacks, J., Welch, W. J., Mitchell, T. J., and Wynn, H. P. (1989). Design and Analysis of Computer Experiments. Statistical Science, 4(4):409–423. doi:10.1214/ss/1177012413.
Santner, T. J., Williams, B. J., and Notz, W. I. (2018). The Design and Analysis of Computer Experiments. 2nd edition. Springer-Verlag.
Stein, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer Series in Statistics. Springer-Verlag. doi:10.1007/978-1-4612-1494-6.
gekm for fitting a (gradient-enhanced) Kriging model.
tangents for drawing tangent lines.
## 1-dimensional example: Oakley and O’Hagan (2002)

# Define test function and its gradient
f <- function(x) 5 + x + cos(x)
fGrad <- function(x) 1 - sin(x)

# Generate coordinates and calculate slopes
x <- seq(-5, 5, length = 5)
y <- f(x)
dy <- fGrad(x)
dat <- data.frame(x, y)
deri <- data.frame(x = dy)

# Fit Kriging model
km.1d <- gekm(y ~ x, data = dat, covtype = "gaussian", theta = 1)
# Fit gradient-enhanced Kriging model
gekm.1d <- gekm(y ~ x, data = dat, deriv = deri, covtype = "gaussian", theta = 1)

# Generate new data for prediction
newdat <- data.frame(x = seq(-6, 6, length = 100))

# Prediction for both models
pred.km.1d <- predict(km.1d, newdat)
pred.gekm.1d <- predict(gekm.1d, newdat)

# Draw curves
curve(f(x), from = -6, to = 6, lwd = 2)
lines(newdat$x, pred.km.1d$fit, col = rgb(0.5, 0.5, 0.5), lwd = 2, lty = 2)
lines(newdat$x, pred.gekm.1d$fit, col = rgb(0, 0.5, 1), lwd = 2, lty = 2)

# Add pointwise confidence intervals
lower.km.1d <- pred.km.1d$fit - qnorm(0.975) * pred.km.1d$sd.fit
upper.km.1d <- pred.km.1d$fit + qnorm(0.975) * pred.km.1d$sd.fit
lower.gekm.1d <- pred.gekm.1d$fit - qnorm(0.975) * pred.gekm.1d$sd.fit
upper.gekm.1d <- pred.gekm.1d$fit + qnorm(0.975) * pred.gekm.1d$sd.fit
polygon(c(newdat$x, rev(newdat$x)), c(lower.km.1d, rev(upper.km.1d)),
        col = rgb(0.5, 0.5, 0.5, alpha = 0.1), border = NA)
polygon(c(newdat$x, rev(newdat$x)), c(lower.gekm.1d, rev(upper.gekm.1d)),
        col = rgb(0, 0.5, 1, alpha = 0.1), border = NA)

# Add tangent lines and observations
tangents(x, y, dy, lwd = 2, col = "red")
points(x, y, pch = 16)

# Add legend
legend("topleft", lty = c(1, 2, 2), lwd = 2, bty = "n",
       col = c("black", rgb(0.5, 0.5, 0.5), rgb(0, 0.5, 1)),
       legend = c("Test function", "Kriging", "GEK"))

## 2-dimensional example: Branin-Hoo function

# Generate a grid for training
n <- 5
x1 <- seq(-5, 10, length = n)
x2 <- seq(0, 15, length = n)
x <- expand.grid(x1 = x1, x2 = x2)
y <- branin(x)
dy <- braninGrad(x)
dat <- data.frame(x, y)
deri <- data.frame(dy)

# Fit Kriging model
km.2d <- gekm(y ~ ., data = dat)
km.2d

# Fit gradient-enhanced Kriging model
gekm.2d <- gekm(y ~ ., data = dat, deriv = deri)
gekm.2d

# Generate new data for prediction
n.grid <- 50
x1.grid <- seq(-5, 10, length = n.grid)
x2.grid <- seq(0, 15, length = n.grid)
newdat <- expand.grid(x1 = x1.grid, x2 = x2.grid)

# Prediction for both models and actual outcome
pred.km.2d <- predict(km.2d, newdat)
pred.gekm.2d <- predict(gekm.2d, newdat)
truth <- outer(x1.grid, x2.grid, function(x1, x2) branin(cbind(x1, x2)))

# Contour plots of predicted and actual output
par(mfrow = c(1, 3), oma = c(3.5, 3.5, 0, 0.2), mar = c(0, 0, 1.5, 0))
contour(x1.grid, x2.grid, truth, nlevels = 30, levels = seq(0, 300, 10),
        main = "Branin-Hoo")
points(x, pch = 16)
contour(x1.grid, x2.grid, matrix(pred.km.2d$fit, nrow = n.grid, ncol = n.grid),
        levels = seq(0, 300, 10), main = "Kriging", yaxt = "n")
points(x, pch = 16)
contour(x1.grid, x2.grid, matrix(pred.gekm.2d$fit, nrow = n.grid, ncol = n.grid),
        levels = seq(0, 300, 10), main = "GEK", yaxt = "n")
points(x, pch = 16)
mtext(side = 1, outer = TRUE, line = 2.5, expression(x[1]))
mtext(side = 2, outer = TRUE, line = 2, expression(x[2]))

# Contour plots of predicted variance
par(mfrow = c(1, 2), oma = c(3.5, 3.5, 0, 0.2), mar = c(0, 0, 1.5, 0))
contour(x1.grid, x2.grid, matrix(pred.km.2d$sd.fit^2, nrow = n.grid, ncol = n.grid),
        main = "Kriging variance")
points(x, pch = 16)
contour(x1.grid, x2.grid, matrix(pred.gekm.2d$sd.fit^2, nrow = n.grid, ncol = n.grid),
        main = "GEK variance", yaxt = "n")
points(x, pch = 16)
mtext(side = 1, outer = TRUE, line = 2.5, expression(x[1]))
mtext(side = 2, outer = TRUE, line = 2, expression(x[2]))
The Qing function is defined by

$$f(x) = \sum_{i=1}^{d} \left(x_i^2 - i\right)^2$$

with $x_i \in [-500, 500]$ for $i = 1, \ldots, d$.
qing(x)
qingGrad(x)
x: a numeric vector of length $d$.
The gradient of the Qing function is

$$\nabla f(x) = \left(4 x_1 (x_1^2 - 1), \ldots, 4 x_d (x_d^2 - d)\right)^\top.$$
The Qing function attains its global minimum $f(x^\star) = 0$ at $x^\star = \left(\pm\sqrt{1}, \pm\sqrt{2}, \ldots, \pm\sqrt{d}\right)^\top$.
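The analytic gradient can be verified numerically; the following minimal sketch (not part of the package's documented examples) compares qingGrad with a central finite-difference approximation in base R.

# Compare qingGrad with central finite differences at a random point
set.seed(42)
x0 <- runif(3, -2, 2)   # a random point in dimension d = 3
eps <- 1e-6
fd <- sapply(seq_along(x0), function(i) {
    e <- replace(numeric(length(x0)), i, eps)
    (qing(x0 + e) - qing(x0 - e)) / (2 * eps)
})
max(abs(fd - qingGrad(x0)))  # should be near zero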
qing returns the function value of the Qing function at x.

qingGrad returns the gradient of the Qing function at x.
Carmen van Meegen
Qing, A. (2006). Dynamic Differential Evolution Strategy and Applications in Electromagnetic Inverse Scattering Problems. IEEE Transactions on Geoscience and Remote Sensing, 44(1):116–125. doi:10.1109/TGRS.2005.859347.
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
# 1-dimensional Qing function with tangents
curve(qing(x), from = -1.7, to = 1.7)
x <- seq(-1.5, 1.5, length = 5)
y <- qing(x)
dy <- qingGrad(x)
tangents(x, y, dy, length = 1, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of Qing function
n.grid <- 50
x1 <- seq(-2, 2, length.out = n.grid)
x2 <- seq(-2, 2, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) qing(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Qing function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
    "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid])/4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
    col = colors[y.facet.range])
The Rastrigin function is defined by

$$f(x) = 10\,d + \sum_{i=1}^{d} \left(x_i^2 - 10\cos(2\pi x_i)\right)$$

with $x_i \in [-5.12, 5.12]$ for $i = 1, \ldots, d$.
rastrigin(x)
rastriginGrad(x)
x: a numeric vector of length $d$.
The gradient of the Rastrigin function is

$$\nabla f(x) = \left(2x_1 + 20\pi\sin(2\pi x_1), \ldots, 2x_d + 20\pi\sin(2\pi x_d)\right)^\top.$$
The Rastrigin function has one global minimum $f(x^\star) = 0$ at $x^\star = (0, \ldots, 0)^\top$.
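A one-line check of the stated minimum (a minimal sketch, using the documented vector interface in dimension d = 4):

# The global minimum of the Rastrigin function lies at the origin
rastrigin(rep(0, 4))      # function value 0 at the global minimum
rastriginGrad(rep(0, 4))  # gradient vanishes at the minimum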
rastrigin returns the function value of the Rastrigin function at x.

rastriginGrad returns the gradient of the Rastrigin function at x.
Carmen van Meegen
Plevris, V. and Solorzano, G. (2022). A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking. Data, 7(4):46. doi:10.3390/data7040046.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
tangents for drawing tangent lines.
# 1-dimensional Rastrigin function with tangents
curve(rastrigin(x), from = -5, to = 5, n = 200)
x <- seq(-4.5, 4.5, length = 5)
y <- rastrigin(x)
dy <- rastriginGrad(x)
tangents(x, y, dy, length = 2, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of Rastrigin function
n.grid <- 100
x1 <- x2 <- seq(-5.12, 5.12, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) rastrigin(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Rastrigin function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
    "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid])/4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
    col = colors[y.facet.range])
The Schwefel function is defined by

$$f(x) = 418.9829\,d - \sum_{i=1}^{d} x_i \sin\left(\sqrt{|x_i|}\right)$$

with $x_i \in [-500, 500]$ for $i = 1, \ldots, d$.
schwefel(x)
schwefelGrad(x)
x: a numeric vector of length $d$.
The gradient of the Schwefel function has components

$$\frac{\partial f}{\partial x_i}(x) = -\sin\left(\sqrt{|x_i|}\right) - \frac{\sqrt{|x_i|}}{2}\cos\left(\sqrt{|x_i|}\right), \quad i = 1, \ldots, d.$$
The Schwefel function has one global minimum $f(x^\star) = 0$ at $x^\star = (420.9687, \ldots, 420.9687)^\top$.
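A minimal numerical check of the minimizer (the coordinate 420.9687 is rounded, so the results are only approximately zero):

# Evaluate the Schwefel function at its global minimizer (d = 2)
x.star <- rep(420.9687, 2)
schwefel(x.star)      # approximately 0
schwefelGrad(x.star)  # approximately (0, 0)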
schwefel returns the function value of the Schwefel function at x.

schwefelGrad returns the gradient of the Schwefel function at x.
Carmen van Meegen
Plevris, V. and Solorzano, G. (2022). A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking. Data, 7(4):46. doi:10.3390/data7040046.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
tangents for drawing tangent lines.
# 1-dimensional Schwefel function with tangents
curve(schwefel(x), from = -500, to = 500, n = 500)
x <- seq(-450, 450, length = 5)
y <- schwefel(x)
dy <- schwefelGrad(x)
tangents(x, y, dy, length = 200, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of Schwefel function
n.grid <- 75
x1 <- x2 <- seq(-500, 500, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) schwefel(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Schwefel function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
    "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid])/4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
    col = colors[y.facet.range])
The short column function is defined by

$$f(x) = 1 - \frac{4M}{b h^2 Y} - \frac{P^2}{b^2 h^2 Y^2}$$

with $x = (Y, M, P)^\top$.
short(x, b = 5, h = 15)
shortGrad(x, b = 5, h = 15)
x: a numeric vector of length 3.

b: width of the cross-section in mm. Default is 5.

h: depth of the cross-section in mm. Default is 15.
The short column function describes the limit state function of a short column with uncertain material properties and loads.
Input | Distribution | Mean | Standard deviation | Description
$Y$ | Lognormal | 5 | 0.5 | yield stress in MPa
$M$ | Normal | 2000 | 400 | bending moment in N mm
$P$ | Normal | 500 | 100 | axial force in N
The bending moment $M$ and the axial force $P$ are correlated with $\rho = 0.5$.
Note, the normal and log-normal distributions above are each specified by their mean and standard deviation.
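For illustration, the limit state can be evaluated at the input means; this is a minimal sketch that assumes x is ordered as (Y, M, P), in line with the table above.

# Short column limit state at the input means (Y, M, P)
x.mean <- c(5, 2000, 500)
short(x.mean)                 # default cross-section b = 5, h = 15
short(x.mean, b = 8, h = 20)  # a larger cross-section increases the limit state
shortGrad(x.mean)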
short returns the function value of the short column function at x.

shortGrad returns the gradient of the short column function at x.
Carmen van Meegen
Kuschel, N. and Rackwitz, R. (1997). Two Basic Problems in Reliability-Based Structural Optimization. Mathematical Methods of Operations Research, 46(3):309–333.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
Simulates process paths conditional on a fitted gekm model.
## S3 method for class 'gekm'
simulate(object, nsim = 1, seed = NULL, newdata = NULL, tol = NULL, ...)
object: an object of class "gekm".

nsim: number of simulated process paths. Default is 1.

seed: this argument is currently not supported.

newdata: a data.frame containing the points at which the process paths are simulated.

tol: a tolerance for the condition number of the conditional correlation matrix of the points in newdata, see ‘Details’.

...: further arguments, not used.
val: the simulated process paths, with one column per path and one row per point in newdata.
Carmen van Meegen
Cressie, N. A. C. (1993). Statistics for Spatial Data. John Wiley & Sons. doi:10.1002/9781119115151.
Ripley, B. D. (1981). Spatial Statistics. John Wiley & Sons. doi:10.1002/0471725218.
gekm for fitting a (gradient-enhanced) Kriging model.

predict.gekm for prediction at new data points based on a model of class "gekm".
## 1-dimensional example
# Define test function and its gradient from Oakley and O’Hagan (2002)
f <- function(x) 5 + x + cos(x)
fGrad <- function(x) 1 - sin(x)

# Generate coordinates and calculate slopes
x <- seq(-5, 5, length = 5)
y <- f(x)
dy <- fGrad(x)
dat <- data.frame(x, y)
deri <- data.frame(x = dy)

# Fit Kriging model
km.1d <- gekm(y ~ x, data = dat, covtype = "gaussian", theta = 1)

# Fit gradient-enhanced Kriging model
gekm.1d <- gekm(y ~ x, data = dat, deriv = deri, covtype = "gaussian", theta = 1)

# Generate new data for prediction and simulation
newdat <- data.frame(x = seq(-6, 6, length = 600))

# Prediction for both models
pred.km.1d <- predict(km.1d, newdat)
pred.gekm.1d <- predict(gekm.1d, newdat)

# Simulate process paths conditional on fitted models
set.seed(1)
n <- 50
sim.km.1d <- simulate(km.1d, nsim = n, newdata = newdat, tol = 35)
sim.gekm.1d <- simulate(gekm.1d, nsim = n, newdata = newdat, tol = 35)

par(mfrow = c(1, 2), oma = c(3.5, 3.5, 0, 0.2), mar = c(0, 0, 1.5, 0))
matplot(newdat$x, sim.km.1d, type = "l", lty = 1, col = 2:8, lwd = 1,
    ylim = c(-1, 12), main = "Kriging")
lines(newdat$x, pred.km.1d$fit + qnorm(0.975) * pred.km.1d$sd, lwd = 1.5)
lines(newdat$x, pred.km.1d$fit - qnorm(0.975) * pred.km.1d$sd, lwd = 1.5)
points(x, y, pch = 16, cex = 1)
matplot(newdat$x, sim.gekm.1d, type = "l", lty = 1, col = 2:8, lwd = 1,
    ylim = c(-1, 12), main = "GEK", yaxt = "n")
lines(newdat$x, pred.gekm.1d$fit + qnorm(0.975) * pred.gekm.1d$sd, lwd = 1.5)
lines(newdat$x, pred.gekm.1d$fit - qnorm(0.975) * pred.gekm.1d$sd, lwd = 1.5)
points(x, y, pch = 16, cex = 1)
mtext(side = 1, outer = TRUE, line = 2.5, "x")
mtext(side = 2, outer = TRUE, line = 2.5, "f(x)")
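Since the paths are drawn conditional on the fitted model, their pointwise empirical mean should approach the predicted mean as nsim grows; a minimal sketch based on the objects created above, assuming the returned value can be treated as a matrix with one column per path (as in the matplot calls):

# Empirical mean of the simulated GEK paths versus the model prediction
mean.sim <- rowMeans(sim.gekm.1d)
max(abs(mean.sim - pred.gekm.1d$fit))  # decreases as nsim increases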
The sphere function is defined by

$$f(x) = \sum_{i=1}^{d} x_i^2$$

with $x_i \in [-5.12, 5.12]$ for $i = 1, \ldots, d$.
sphere(x)
sphereGrad(x)
x: a numeric vector of length $d$.
The gradient of the sphere function is

$$\nabla f(x) = (2x_1, \ldots, 2x_d)^\top.$$
The sphere function has one global minimum $f(x^\star) = 0$ at $x^\star = (0, \ldots, 0)^\top$.
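Because the gradient is available in closed form, it can drive simple first-order optimization; a minimal gradient-descent sketch (step size and starting point chosen for illustration):

# A few steps of gradient descent on the sphere function
x <- c(3, -4)
for (k in 1:25) {
    x <- x - 0.1 * sphereGrad(x)  # fixed step size 0.1
}
x          # close to the global minimum (0, 0)
sphere(x)  # close to 0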
sphere returns the function value of the sphere function at x.

sphereGrad returns the gradient of the sphere function at x.
Carmen van Meegen
Plevris, V. and Solorzano, G. (2022). A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking. Data, 7(4):46. doi:10.3390/data7040046.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
tangents for drawing tangent lines.
# 1-dimensional sphere function with tangents
curve(sphere(x), from = -5, to = 5)
x <- seq(-4.5, 4.5, length = 5)
y <- sphere(x)
dy <- sphereGrad(x)
tangents(x, y, dy, length = 2, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of sphere function
n.grid <- 50
x1 <- x2 <- seq(-5.12, 5.12, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) sphere(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of sphere function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
    "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid])/4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
    col = colors[y.facet.range])
The steel column function is defined by

$$f(x) = F_s - P\left(\frac{1}{2BD} + \frac{F_0}{BDH} \cdot \frac{E_b}{E_b - P}\right)$$

with $P = P_1 + P_2 + P_3$, $E_b = \frac{\pi^2 E B D H^2}{2 L^2}$ and $x = (F_s, P_1, P_2, P_3, B, D, H, F_0, E)^\top$.
steel(x, L = 7500)
steelGrad(x, L = 7500)
x: a numeric vector of length 9.

L: length of the column in mm. Default is 7500.
The steel column function describes the limit state function of a steel column with uncertain parameters.
Input | Distribution | Mean | Standard deviation | Description
$F_s$ | Lognormal | 400 | 35 | yield stress in MPa
$P_1$ | Normal | 500000 | 50000 | dead weight load in N
$P_2$ | Gumbel | 600000 | 90000 | variable load in N
$P_3$ | Gumbel | 600000 | 90000 | variable load in N
$B$ | Lognormal | 300 | 3 | flange breadth in mm
$D$ | Lognormal | 20 | 2 | flange thickness in mm
$H$ | Lognormal | 300 | 5 | profile height in mm
$F_0$ | Normal | 30 | 10 | initial deflection in mm
$E$ | Weibull | 210000 | 4200 | Young's modulus in MPa
Here, the normal, log-normal, Gumbel and Weibull distributions above are each specified by their mean and standard deviation.
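For orientation, the limit state can be evaluated at the mean values of the nine inputs; a minimal sketch assuming x is ordered as in the table above, i.e. (F_s, P_1, P_2, P_3, B, D, H, F_0, E).

# Steel column limit state at the mean values of the inputs
x.mean <- c(400, 500000, 600000, 600000, 300, 20, 300, 30, 210000)
steel(x.mean)       # default column length L = 7500
steelGrad(x.mean)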
steel returns the function value of the steel column function at x.

steelGrad returns the gradient of the steel column function at x.
Carmen van Meegen
Kuschel, N. and Rackwitz, R. (1997). Two Basic Problems in Reliability-Based Structural Optimization. Mathematical Methods of Operations Research, 46(3):309–333.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
The Styblinski-Tang function is defined by

$$f(x) = \frac{1}{2}\sum_{i=1}^{d} \left(x_i^4 - 16x_i^2 + 5x_i\right)$$

with $x_i \in [-5, 5]$ for $i = 1, \ldots, d$.
styblinski(x)
styblinskiGrad(x)
x: a numeric vector of length $d$.
The gradient of the Styblinski-Tang function is

$$\nabla f(x) = \left(2x_1^3 - 16x_1 + \tfrac{5}{2}, \ldots, 2x_d^3 - 16x_d + \tfrac{5}{2}\right)^\top.$$
The Styblinski-Tang function has one global minimum $f(x^\star) = -39.16599\,d$ at $x^\star = (-2.903534, \ldots, -2.903534)^\top$.
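In one dimension the minimizer can be recovered numerically, e.g. with optimize:

# Locate the minimum of the 1-dimensional Styblinski-Tang function
opt <- optimize(styblinski, interval = c(-5, 5))
opt$minimum    # approximately -2.903534
opt$objective  # approximately -39.16599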
styblinski returns the function value of the Styblinski-Tang function at x.

styblinskiGrad returns the gradient of the Styblinski-Tang function at x.
Carmen van Meegen
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
tangents for drawing tangent lines.
# 1-dimensional Styblinski-Tang function with tangents
curve(styblinski(x), from = -5, to = 5)
x <- seq(-4.5, 4.5, length = 5)
y <- styblinski(x)
dy <- styblinskiGrad(x)
tangents(x, y, dy, length = 2, lwd = 2, col = "red")
points(x, y, pch = 16)

# Contour plot of Styblinski-Tang function
n.grid <- 50
x1 <- seq(-5, 5, length.out = n.grid)
x2 <- seq(-5, 5, length.out = n.grid)
y <- outer(x1, x2, function(x1, x2) styblinski(cbind(x1, x2)))
contour(x1, x2, y, xaxs = "i", yaxs = "i", nlevels = 25, xlab = "x1", ylab = "x2")

# Perspective plot of Styblinski-Tang function
col.pal <- colorRampPalette(c("#00007F", "blue", "#007FFF", "cyan", "#7FFF7F",
    "yellow", "#FF7F00", "red", "#7F0000"))
colors <- col.pal(100)
y.facet.center <- (y[-1, -1] + y[-1, -n.grid] + y[-n.grid, -1] + y[-n.grid, -n.grid])/4
y.facet.range <- cut(y.facet.center, 100)
persp(x1, x2, y, phi = 30, theta = -315, expand = 0.75, ticktype = "detailed",
    col = colors[y.facet.range])
The sulfur function models the direct radiative forcing by sulfate aerosols as a function of nine uncertain inputs, $x = \left(Q, Y, \bar{L}, \beta, \psi_e, f_{\psi_e}, T^2, 1 - A_c, (1 - R_s)^2\right)^\top$, see ‘Details’.
sulfur(x, S_0 = 1366, A = 5.1e+14)
sulfurGrad(x, S_0 = 1366, A = 5.1e+14)
x: a numeric vector of length 9.

S_0: solar constant in W/m^2. Default is 1366.

A: surface area of the earth in m^2. Default is 5.1e+14.
The sulfur model function calculates the direct radiative forcing by sulfate aerosols in W/m^2.
Input | Central value | Uncertainty factor | Description
$Q$ | 71 | 1.15 | source strength of anthropogenic sulfur in Tg/yr
$Y$ | 0.5 | 1.5 | fraction of SO2 oxidized to SO4^2- aerosol
$\bar{L}$ | 5.5 | 1.5 | average lifetime of atmospheric SO4^2- in days
$\beta$ | 5.0 | 1.4 | aerosol mass scattering efficiency in m^2/g
$\psi_e$ | 0.3 | 1.3 | fraction of light scattering into upward hemisphere
$f_{\psi_e}$ | 1.7 | 1.2 | fractional increase in aerosol scattering efficiency due to hygroscopic growth
$T^2$ | 0.76 | 1.2 | atmospheric transmittance above aerosol layer
$1 - A_c$ | 0.39 | 1.1 | fraction of earth not covered by cloud
$(1 - R_s)^2$ | 0.72 | 1.15 | surface coalbedo
The inputs are all log-normally distributed.
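For orientation, the model can be evaluated at the central values; a minimal sketch assuming x is ordered as the table rows above and that the tabulated central values apply.

# Sulfur model at the central values of the nine inputs
x.central <- c(71, 0.5, 5.5, 5, 0.3, 1.7, 0.76, 0.39, 0.72)
sulfur(x.central)      # direct radiative forcing in W/m^2
sulfurGrad(x.central)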
sulfur returns the function value of the sulfur function at x.

sulfurGrad returns the gradient of the sulfur function at x.
Carmen van Meegen
Charlson, R. J., Schwartz, S. E., Hales, J. M., Cess, R. D., Coakley, Jr., J. A., Hansen, J. E., and Hoffman, D. J. (1992). Climate Forcing by Anthropogenic Aerosols. Science, 255:423–430. doi:10.1126/science.255.5043.423.
Penner, J. E., Charlson, R. J., Hales, J. M., Laulainen, N. S., Leifer, R., Novakov, T., Ogren, J., Radke, L. F., Schwartz, S. E., and Travis, L. (1994). Quantifying and Minimizing Uncertainty of Climate Forcing by Anthropogenic Aerosols. Bulletin of the American Meteorological Society, 75(3):375–400. doi:10.1175/1520-0477(1994)075<0375:QAMUOC>2.0.CO;2.
Tatang, M. A., Pan, W., Prinn, R. G., and McRae, G. J. (1997). An Efficient Method for Parametric Uncertainty Analysis of Numerical Geophysical Model. Journal of Geophysical Research Atmospheres, 102(18):21925–21932. doi:10.1029/97JD01654.
Draw tangent lines to an existing plot.
tangents(x, y, slope, length = 1, ...)
x, y: coordinate vectors of the points at which the tangent lines are drawn.

slope: vector of slopes at the points.

length: desired length of the tangent lines, see ‘Details’.

...: further graphical parameters to be passed to segments.
The length of the tangent lines is scaled according to the current aspect ratio of the existing plot.
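A minimal sketch of the effect of length (the function and points below are chosen purely for illustration):

# Tangent lines of two different lengths at the same points
curve(sin(x), from = 0, to = 2 * pi)
x <- seq(1, 5, by = 2)
tangents(x, sin(x), cos(x), length = 1, lwd = 2, col = "red")
tangents(x, sin(x), cos(x), length = 2, lty = 2, col = "blue")
points(x, sin(x), pch = 16)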
Carmen van Meegen
segments for drawing line segments between pairs of points.
# Define test function and its gradient from Oakley and O'Hagan (2002)
f <- function(x) 5 + x + cos(x)
fGrad <- function(x) 1 - sin(x)

# Generate coordinates and calculate slopes
x <- seq(-5, 5, length = 5)
y <- f(x)
dy <- fGrad(x)

# Draw curve and tangent lines
curve(f(x), from = -6, to = 6)
tangents(x, y, dy, length = 2, lwd = 2, col = 2:6)
points(x, y, pch = 16)
Overview of the test functions available in gek; a short evaluation sketch follows the list below.
branin: Branin-Hoo function

camel3: Three-hump camel function

camel6: Six-hump camel function

himmelblau: Himmelblau's function

banana: Rosenbrock's banana function

cigar: Bent cigar function

griewank: Griewank function

qing: Qing function

rastrigin: Rastrigin function

schwefel: Schwefel function

sphere: Sphere function

styblinski: Styblinski-Tang function

borehole: Borehole function

steel: Steel column function

short: Short column function

sulfur: Sulfur model function
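All test functions share the naming scheme name/nameGrad, so a function and its gradient can be retrieved programmatically; a minimal sketch for some of the d-dimensional functions listed above:

# Evaluate several test functions and their gradients at a common point
funs <- c("qing", "rastrigin", "schwefel", "sphere", "styblinski")
x0 <- c(1, 2)
for (fn in funs) {
    f <- match.fun(fn)
    g <- match.fun(paste0(fn, "Grad"))
    cat(fn, ": f =", f(x0), " grad =", g(x0), "\n")
}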
Carmen van Meegen
Branin, Jr., F. H. (1972). Widely Convergent Method of Finding Multiple Solutions of Simultaneous Nonlinear Equations. IBM Journal of Research and Development, 16(5):504–522.
Jamil, M. and Yang, X.-S. (2013). A Literature Survey of Benchmark Functions for Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194. doi:10.1504/IJMMNO.2013.055204.
Harper, W. V. and Gupta, S. K. (1983). Sensitivity/Uncertainty Analysis of a Borehole Scenario Comparing Latin Hypercube Sampling and Deterministic Sensitivity Approaches. BMI/ONWI-516, Office of Nuclear Waste Isolation, Battelle Memorial Institute, Columbus, OH.
Himmelblau, D. (1972). Applied Nonlinear Programming. McGraw-Hill. ISBN 0-07-028921-2.
Kuschel, N. and Rackwitz, R. (1997). Two Basic Problems in Reliability-Based Structural Optimization. Mathematical Methods of Operations Research, 46(3):309–333.
Morris, M., Mitchell, T., and Ylvisaker, D. (1993). Bayesian Design and Analysis of Computer Experiments: Use of Derivatives in Surface Prediction. Technometrics, 35(3):243–255. doi:10.1080/00401706.1993.10485320.
Plevris, V. and Solorzano, G. (2022). A Collection of 30 Multidimensional Functions for Global Optimization Benchmarking. Data, 7(4):46. doi:10.3390/data7040046.
Rosenbrock, H. H. (1960). An Automatic Method for Finding the Greatest or least Value of a Function. The Computer Journal, 3(3):175–184. doi:10.1093/comjnl/3.3.175.
Surjanovic, S. and Bingham, D. (2013). Virtual Library of Simulation Experiments: Test Functions and Datasets. https://www.sfu.ca/~ssurjano/ (retrieved January 19, 2024).
banana, bananaGrad, borehole, boreholeGrad, branin, braninGrad, camel3, camel3Grad, camel6, camel6Grad, cigar, cigarGrad, griewank, griewankGrad, himmelblau, himmelblauGrad, qing, qingGrad, rastrigin, rastriginGrad, schwefel, schwefelGrad, short, shortGrad, sphere, sphereGrad, steel, steelGrad, styblinski, styblinskiGrad, sulfur, sulfurGrad