Title: Gaussian Process Based Design and Analysis for the Active Subspace Method
Description: The active subspace method is a sensitivity analysis technique that finds important linear combinations of input variables for a simulator. This package provides functions allowing estimation of the active subspace without gradient information using Gaussian processes as well as sequential experimental design tools to minimize the amount of data required to do so. Implements Wycoff et al. (JCGS, 2021) <doi:10.48550/arXiv.1907.11572>.
Authors: Nathan Wycoff, Mickael Binois
Maintainer: Nathan Wycoff <[email protected]>
License: BSD_3_clause + file LICENSE
Version: 1.1.1
Built: 2024-12-22 06:40:14 UTC
Source: CRAN
Active subspace estimation with Gaussian processes
The primary function for analysis of the active subspace given some set of function evaluations is C_GP.
C_var, C_var2, and C_tr give three possible acquisition functions for sequential design. Either C_var or C_var2 is recommended; see Wycoff et al. (2021) for details and the example below for usage.
Nathan Wycoff, Mickael Binois
N. Wycoff, M. Binois, S. Wild (2021), Sequential Learning of Active Subspaces, Journal of Computational and Graphical Statistics.
P. Constantine (2015), Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies, Philadelphia, PA: SIAM.
################################################################################
### Sequential learning of active subspaces
################################################################################
library(hetGP); library(lhs)
set.seed(42)
nvar <- 2
n <- 20
nits <- 20

# theta gives the subspace direction
f <- function(x, theta, nugget = 1e-6){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
  # Unreachable alternative response (would also require an erf implementation):
  # return(100*erf((xact + 0.5)*5) + hetGP::f1d(xact) +
  #        rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

theta_dir <- pi/6
act_dir <- c(cos(theta_dir), -sin(theta_dir))

# Create design of experiments and initial GP model
design <- X <- matrix(signif(maximinLHS(n, nvar), 2), ncol = nvar)
response <- Y <- apply(design, 1, f, theta = theta_dir)
model <- mleHomGP(design, response, lower = rep(1e-4, nvar), upper = rep(0.5, nvar),
                  known = list(g = 1e-6, beta0 = 0))

C_hat <- C_GP(model)

ngrid <- 51
xgrid <- seq(0, 1, length.out = ngrid)
Xgrid <- as.matrix(expand.grid(xgrid, xgrid))
filled.contour(matrix(f(Xgrid, theta = theta_dir), ngrid))

ssd <- rep(NA, nits)

# Main loop
for(nit in 1:nits) {
  cat(nit, " ")

  # Acquisition function (element-wise variance of C) and its gradient
  af <- function(x, C) C_var(C, x, grad = FALSE)
  af_gr <- function(x, C) C_var(C, x, grad = TRUE)

  # Evaluate the criterion on the grid and pick the best candidate point
  Ctr_grid <- apply(Xgrid, 1, af, C = C_hat)
  opt_cand <- matrix(Xgrid[which.max(Ctr_grid),], 1)

  # Refine with gradient based optimization
  opt <- optim(opt_cand, af, af_gr, method = 'L-BFGS-B', lower = rep(0, nvar),
               C = C_hat, upper = rep(1, nvar), hessian = TRUE,
               control = list(fnscale = -1, trace = 0, maxit = 10))

  # Criterion surface with best initial point and corresponding local optimum
  filled.contour(matrix(Ctr_grid, ngrid), color.palette = terrain.colors,
                 plot.axes = {axis(1); axis(2); points(X, pch = 20);
                              points(opt_cand, pch = 20, col = 'blue');
                              points(opt$par, pch = 20, col = 'red')})

  # Augment the design and update the GP fit
  X <- rbind(X, opt$par)
  Ynew <- f(opt$par, theta = theta_dir)
  Y <- c(Y, Ynew)
  model <- update(model, Xnew = opt$par, Znew = Ynew)

  ## periodically restart model fit
  if(nit %% 5 == 0){
    mod2 <- mleHomGP(X = list(X0 = model$X0, Z0 = model$Z0, mult = model$mult),
                     Z = model$Z, known = model$used_args$known,
                     lower = model$used_args$lower, upper = model$used_args$upper)
    if(mod2$ll > model$ll) model <- mod2
  }

  C_hat <- C_GP(model)

  # Compute subspace distance to the true active direction
  ssd[nit] <- subspace_dist(C_hat, matrix(act_dir, nrow = nvar), r = 1)
}

plot(ssd, type = 'b')
Given a const_C object, extracts the underlying matrix.
## S3 method for class 'const_C'
as.matrix(x, ...)
x | A const_C object with field 'mat'.
... | Additional parameters. Not used.
The mat entry of C, a matrix.
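A minimal sketch of typical usage; it assumes a const_C object named C_hat, such as the one constructed in the C_GP example below.

## Assuming C_hat is a const_C object, e.g. from the C_GP example below:
C_mat <- as.matrix(C_hat)   # a plain matrix, identical to C_hat$mat
eigen(C_mat)$values         # eigenvalue decay indicates the active subspace dimension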
Computes the integral over the input domain of the outer product of the gradients of a Gaussian process. The corresponding matrix is the C matrix central in active subspace methodology.
C_GP(modelX, y, measure = "lebesgue", xm = NULL, xv = NULL, S = NULL, verbose = TRUE)
modelX | This may be either 1) a GP fit (e.g., a homGP model from the hetGP package), or 2) a matrix of design locations, in which case y must also be provided.
y | A vector of responses corresponding to the design matrix; may be omitted if a GP fit is provided in the modelX argument.
measure | One of c("lebesgue", "gaussian", "trunc_gaussian", "sample", "discrete"), indicating the probability distribution with respect to which the input points are drawn in the definition of the active subspace. "lebesgue" uses the Lebesgue or uniform measure over the unit hypercube [0,1]^d. "gaussian" uses a Gaussian or normal distribution, in which case xm and xv should be specified. "trunc_gaussian" gives a truncated Gaussian or normal distribution over the unit hypercube [0,1]^d, in which case xm and xv should be specified. "sample" gives the sample or empirical measure (Dirac deltas located at each design point), which is equivalent to calculating the average expected gradient outer product at the design points. "discrete" gives a measure which puts equal weight at points in the input space specified via the S parameter, which should be a matrix with one row for each atom of the measure. (See the sketch after the example below for usage of the non-default measures.)
xm | If measure is "gaussian" or "trunc_gaussian", gives the mean vector.
xv | If measure is "gaussian" or "trunc_gaussian", gives the marginal variance vector. The covariance matrix is assumed to be diagonal.
S | If measure is "discrete", gives the locations of the measure's atoms: a matrix, each row of which gives an atom.
verbose | Should progress be printed?
A const_C object with elements:
model | GP model provided or estimated;
mat | C matrix estimated;
Wij | list of W matrices, of size the number of variables;
ct | covariance type (1 for "Gaussian", 2 for "Matern3_2", 3 for "Matern5_2").
N. Wycoff, M. Binois, S. Wild (2021), Sequential Learning of Active Subspaces, Journal of Computational and Graphical Statistics.
P. Constantine (2015), Active Subspaces, Philadelphia, PA: SIAM.
################################################################################
### Active subspace of a Gaussian process
################################################################################
library(hetGP); library(lhs)
set.seed(42)
nvar <- 2
n <- 100

# theta gives the subspace direction
f <- function(x, theta, nugget = 1e-3){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

theta_dir <- pi/6
act_dir <- c(cos(theta_dir), -sin(theta_dir))

# Create design of experiments and initial GP model
design <- X <- matrix(signif(maximinLHS(n, nvar), 2), ncol = nvar)
response <- Y <- apply(design, 1, f, theta = theta_dir)
model <- mleHomGP(design, response, known = list(beta0 = 0))

C_hat <- C_GP(model)

# Subspace distance to true subspace:
print(subspace_dist(C_hat, matrix(act_dir, nrow = nvar), r = 1))
plot(design %*% eigen(C_hat$mat)$vectors[,1], response,
     main = "Projection along estimated active direction")
plot(design %*% eigen(C_hat$mat)$vectors[,2], response,
     main = "Projection along estimated inactive direction")

# For other plots:
# par(mfrow = c(1, 3)) # uncomment to have all plots together
plot(C_hat)
# par(mfrow = c(1, 1)) # restore graphical window
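The example above uses the default Lebesgue measure. The following sketch, reusing model and nvar from that example, illustrates the other documented measures; the calls follow the argument descriptions above, and the specific xm, xv and S values are arbitrary illustrations.

## Continuing from the example above (model and nvar already defined):
# Truncated Gaussian measure on [0,1]^2, centered at 0.5 with marginal variance 0.05:
C_tg <- C_GP(model, measure = "trunc_gaussian", xm = rep(0.5, nvar), xv = rep(0.05, nvar))
# Sample (empirical) measure: average gradient outer product at the design points:
C_samp <- C_GP(model, measure = "sample")
# Discrete measure with equal weight on user-specified atoms (one per row of S):
S <- matrix(runif(20 * nvar), ncol = nvar)
C_disc <- C_GP(model, measure = "discrete", S = S)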
Confidence intervals on the eigenvalues of C via Monte Carlo/GP.
C_GP_ci(model, B = 100)
model | A homGP model.
B | Number of Monte Carlo iterates.
A list with elements ci, giving 95% confidence intervals for the eigenvalues of C, and eigen_draws, giving the Monte Carlo draws of the eigenvalues.
################################################################################
## Example of uncertainty quantification on C estimate
################################################################################
library(hetGP); library(lhs)
set.seed(42)
nvar <- 2
n <- 20
nits <- 20

# theta gives the subspace direction
f <- function(x, theta, nugget = 1e-6){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

theta_dir <- pi/6
act_dir <- c(cos(theta_dir), -sin(theta_dir))

# Create design of experiments and initial GP model
design <- X <- matrix(signif(maximinLHS(n, nvar), 2), ncol = nvar)
response <- Y <- apply(design, 1, f, theta = theta_dir)
model <- mleHomGP(design, response, known = list(beta0 = 0))

res <- C_GP_ci(model)

plot(c(1, 2), log(c(mean(res$eigen_draws[,1]), mean(res$eigen_draws[,2]))),
     ylim = range(log(res$eigen_draws)), ylab = "Eigenvalue", xlab = "Index")
segments(1, log(res$ci[1,1]), 1, log(res$ci[2,1]))
segments(2, log(res$ci[1,2]), 2, log(res$ci[2,2]))
Expected variance of trace of C
C_tr(C, xnew, grad = FALSE)
C | A const_C object, the result of a call to C_GP.
xnew | The new design point.
grad | If TRUE, returns the gradient of the criterion with respect to xnew instead of the criterion value.
A real number giving the expected variance of the trace of C given the current design.
N. Wycoff, M. Binois, S. Wild (2021), Sequential Learning of Active Subspaces, Journal of Computational and Graphical Statistics.
################################################################################
### Variance of trace criterion landscape
################################################################################
library(hetGP)
set.seed(42)
nvar <- 2
n <- 20

# theta gives the subspace direction
f <- function(x, theta = pi/6, nugget = 1e-6){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

design <- matrix(signif(runif(nvar*n), 2), ncol = nvar)
response <- apply(design, 1, f)
model <- mleHomGP(design, response, lower = rep(1e-4, nvar), upper = rep(0.5, nvar),
                  known = list(g = 1e-4))

C_hat <- C_GP(model)

ngrid <- 101
xgrid <- seq(0, 1, length.out = ngrid)
Xgrid <- as.matrix(expand.grid(xgrid, xgrid))
filled.contour(matrix(f(Xgrid), ngrid))

Ctr_grid <- apply(Xgrid, 1, C_tr, C = C_hat)
filled.contour(matrix(Ctr_grid, ngrid), color.palette = terrain.colors,
               plot.axes = {axis(1); axis(2); points(design, pch = 20)})
Element-wise variance of C_(n+1), the C matrix updated with a new design point.
C_var(C, xnew, grad = FALSE)
C | A const_C object, the result of a call to C_GP.
xnew | The new design point.
grad | If TRUE, returns the gradient of the criterion with respect to xnew instead of the criterion value.
A real number giving the expected elementwise variance of C given the current design.
N. Wycoff, M. Binois, S. Wild (2021), Sequential Learning of Active Subspaces, Journal of Computational and Graphical Statistics.
################################################################################
### Norm of the variance of C criterion landscape
################################################################################
library(hetGP)
set.seed(42)
nvar <- 2
n <- 20

# theta gives the subspace direction
f <- function(x, theta = pi/6, nugget = 1e-6){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

design <- matrix(signif(runif(nvar*n), 2), ncol = nvar)
response <- apply(design, 1, f)
model <- mleHomGP(design, response, lower = rep(1e-4, nvar), upper = rep(0.5, nvar),
                  known = list(g = 1e-4))

C_hat <- C_GP(model)

ngrid <- 51
xgrid <- seq(0, 1, length.out = ngrid)
Xgrid <- as.matrix(expand.grid(xgrid, xgrid))
filled.contour(matrix(f(Xgrid), ngrid))

cvar_crit <- function(C, xnew){
  return(sqrt(sum(C_var(C, xnew)^2)))
}

Cvar_grid <- apply(Xgrid, 1, cvar_crit, C = C_hat)
filled.contour(matrix(Cvar_grid, ngrid), color.palette = terrain.colors,
               plot.axes = {axis(1); axis(2); points(design, pch = 20)})
Variance of C, defined as E[(C - E[C])^2], where A^2 = AA (matrix multiplication, not elementwise multiplication).
C_var2(C, xnew, grad = FALSE)
C | A const_C object, the result of a call to C_GP.
xnew | The new design point.
grad | If TRUE, returns the gradient of the criterion with respect to xnew instead of the criterion value.
A real number giving the expected variance of C defined via matrix multiplication given the current design.
N. Wycoff, M. Binois, S. Wild (2021), Sequential Learning of Active Subspaces, Journal of Computational and Graphical Statistics.
################################################################################
### Norm of the variance of C criterion landscape
################################################################################
library(hetGP)
set.seed(42)
nvar <- 2
n <- 20

# theta gives the subspace direction
f <- function(x, theta = pi/6, nugget = 1e-6){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

design <- matrix(signif(runif(nvar*n), 2), ncol = nvar)
response <- apply(design, 1, f)
model <- mleHomGP(design, response, lower = rep(1e-4, nvar), upper = rep(0.5, nvar),
                  known = list(g = 1e-4))

C_hat <- C_GP(model)

ngrid <- 51
xgrid <- seq(0, 1, length.out = ngrid)
Xgrid <- as.matrix(expand.grid(xgrid, xgrid))
filled.contour(matrix(f(Xgrid), ngrid))

cvar_crit <- function(C, xnew){
  return(sqrt(sum(C_var(C, xnew)^2)))
}

Cvar_grid <- apply(Xgrid, 1, cvar_crit, C = C_hat)
filled.contour(matrix(Cvar_grid, ngrid), color.palette = terrain.colors,
               plot.axes = {axis(1); axis(2); points(design, pch = 20)})
Given an m-dimensional function whose inputs live in bounded intervals [a1, b1], ..., [am, bm], return a wrapped version of the function whose inputs live in R^m; the inputs are transformed using the logit function.
domain_to_R(f, domain)
f | The function to wrap; it should have a single vector-valued input.
domain | A list of real tuples indicating the original domain of the function.
A function wrapping f.
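A minimal sketch with a toy function; the assumption that the wrapper maps R^m to the original domain through the inverse logit (so the origin maps to the domain midpoints) follows from the description above.

# Toy function defined on [0, 2] x [-1, 3]:
f_orig <- function(x) sum(x^2)
f_unbounded <- domain_to_R(f_orig, domain = list(c(0, 2), c(-1, 3)))
# f_unbounded accepts unconstrained inputs; under the assumed inverse-logit mapping,
# c(0, 0) corresponds to the midpoints (1, 1) of the original domain:
f_unbounded(c(0, 0))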
Given an m-dimensional function whose inputs live in bounded intervals [a1, b1], ..., [am, bm], return a wrapped version of the function whose inputs live in [-1, 1]^m.
domain_to_unit(f, domain)
f | The function to wrap; it should have a single vector-valued input.
domain | A list of real tuples indicating the original domain of the function.
A function wrapping f.
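A minimal sketch with the same toy function; the assumption that the center of [-1, 1]^m maps to the domain midpoints describes the expected rescaling.

# Toy function defined on [0, 2] x [-1, 3]:
f_orig <- function(x) sum(x^2)
f_unit <- domain_to_unit(f_orig, domain = list(c(0, 2), c(-1, 3)))
# Inputs of the wrapped function live in [-1, 1]^2; presumably c(0, 0) corresponds
# to the domain midpoints (1, 1):
f_unit(c(0, 0))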
Estimates the active subspace of a function from estimated gradients at sampled locations; inputs are assumed to lie in [-1, 1]^m.
grad_est_subspace(f, r, m, M = NULL, scale = FALSE)
f | The function to evaluate.
r | The maximum dimension of the active subspace.
m | The dimension of the underlying/embedding space.
M | Optional budget of function evaluations; if not supplied, a default is used.
scale | Should all gradients be scaled to have norm 1?
A list with sub, the active subspace; sv, the singular values (all m of them); fs, the function values; gs, the function gradients; and X, the sampled locations.
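A minimal sketch based on the signature and return fields documented above; the toy function is chosen so that its only active direction is proportional to (1, 1).

# Toy function on [-1, 1]^2 varying only along x1 + x2:
f <- function(x) sin(pi * (x[1] + x[2]) / 2)
res <- grad_est_subspace(f, r = 1, m = 2)
res$sub   # estimated active subspace basis
res$sv    # singular values (all m of them)
res$X     # sampled locations used for the gradient estimates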
Hessian of the log-likelihood with respect to the lengthscale hyperparameters. Currently works for homGP and hetGP models from the hetGP package.
logLikHessian(model)
model | A homGP model.
A matrix giving the Hessian of the GP loglikelihood.
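A minimal sketch using a small homGP fit from the hetGP package; the toy design and response are arbitrary.

library(hetGP)
set.seed(42)
X <- matrix(runif(40), ncol = 2)
Z <- apply(X, 1, function(x) sum(sin(2 * pi * x)))
model <- mleHomGP(X, Z)
H <- logLikHessian(model)   # Hessian w.r.t. the lengthscale hyperparameters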
Computes a matrix square root Lt of the C matrix.
Lt_GP(..., C)
... | Parameters to be passed to C_GP, if C was not provided.
C | The result of a call to C_GP. If provided, all other arguments are ignored.
The matrix Lt, which can be used for sensitivity prewarping, i.e. by computing warped inputs such as Xw = X %*% t(Lt).
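A minimal sketch reusing model and C_hat from the C_GP example above; the prewarping convention shown in the comment is an assumption, not a documented formula.

## Reusing model and C_hat from the C_GP example above:
Lt <- Lt_GP(C = C_hat)
# Hypothetical sensitivity prewarping of the design (assumed convention):
Xw <- model$X0 %*% t(Lt)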
Shifts the domain of a function: f: [-1, 1]^m -> R becomes f: [0, 1]^m -> R.
n11_2_01(f)
f | Initial function.
The same function with domain shifted.
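A minimal sketch with a toy function; the expectation that the domain centers correspond under the shift is an assumption about the rescaling.

# Toy function on [-1, 1]^2:
f_n11 <- function(x) sum(x)
f_01 <- n11_2_01(f_n11)
# Presumably the centers correspond: f_01(c(0.5, 0.5)) should match f_n11(c(0, 0)) = 0.
f_01(c(0.5, 0.5))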
Plot const_C objects
## S3 method for class 'const_C'
plot(x, output = c("all", "matrix", "logvals", "projfn"), ...)
x | A const_C object, the result of a call to C_GP.
output | One of "all", "matrix", "logvals", or "projfn", selecting which plot(s) to produce.
... | Additional parameters. Not used.
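A minimal sketch reusing C_hat from the C_GP example; the output option names come from the usage line above, while the comments on what each panel shows are assumptions.

## Reusing C_hat from the C_GP example:
# par(mfrow = c(1, 3))            # uncomment to view several panels together
plot(C_hat, output = "matrix")    # the estimated C matrix (assumed content)
plot(C_hat, output = "logvals")   # eigenvalues on the log scale (assumed content)
plot(C_hat, output = "projfn")    # projection-based summary (assumed content)
# par(mfrow = c(1, 1))            # restore graphical window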
Print const_C objects
## S3 method for class 'const_C'
print(x, ...)
x | A const_C object, the result of a call to C_GP.
... | Additional parameters. Not used.
Get the distance between subspaces defined as the ranges of A and B
subspace_dist(A, B, r)
A | A matrix or const_C object.
B | Another matrix with the same number of rows as A, or a const_C object of the same dimension.
r | A scalar integer, the dimension of the subspace to compare (only necessary if either A or B is a const_C object).
A nonnegative scalar giving the cosine of the first principal angle between the two subspaces.
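A small self-contained sketch comparing two fixed one-dimensional subspaces of R^2; the expectation that identical subspaces give 0 follows from the use of this quantity as a distance in the package examples.

# Two one-dimensional subspaces of R^2:
A <- matrix(c(1, 0), nrow = 2)            # span of e1
B <- matrix(c(1, 1) / sqrt(2), nrow = 2)  # span of (1, 1)
subspace_dist(A, B, r = 1)
# Identical subspaces should give a distance of 0:
subspace_dist(A, A, r = 1)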
Update Constantine's C matrix with a new observation, using a fast update formula.
update_C2(C, xnew, ynew)
C | A const_C object, the result of a call to C_GP.
xnew | The new design point.
ynew | The new response.
Updated C matrix, a const_C object.
N. Wycoff, M. Binois, S. Wild (2021), Sequential Learning of Active Subspaces, Journal of Computational and Graphical Statistics.
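A minimal sketch mirroring the update.const_C example below; it reuses C_hat, f, theta_dir, act_dir and nvar from the C_GP example above.

## Reusing objects from the C_GP example above:
xnew <- matrix(runif(2), nrow = 1)
ynew <- f(xnew, theta = theta_dir)
C_fast <- update_C2(C_hat, xnew, ynew)
subspace_dist(C_fast, matrix(act_dir, nrow = nvar), r = 1)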
Update Constantine's C with new point(s) for a GP
## S3 method for class 'const_C'
update(object, Xnew, Znew, ...)
object | A const_C object, the result of a call to the C_GP function.
Xnew | Matrix (one point per row) corresponding to the new designs.
Znew | Vector of new responses, of size the number of rows of Xnew.
... | Not used (for consistency of the update method).
The updated const_C object originally provided.
See also: C_GP to generate const_C objects from mleHomGP objects; update_C2 for an update using faster expressions.
################################################################################
### Active subspace of a Gaussian process
################################################################################
library(hetGP); library(lhs)
set.seed(42)
nvar <- 2
n <- 100

# theta gives the subspace direction
f <- function(x, theta, nugget = 1e-3){
  if(is.null(dim(x))) x <- matrix(x, 1)
  xact <- cos(theta) * x[,1] - sin(theta) * x[,2]
  return(hetGP::f1d(xact) + rnorm(n = nrow(x), sd = rep(nugget, nrow(x))))
}

theta_dir <- pi/6
act_dir <- c(cos(theta_dir), -sin(theta_dir))

# Create design of experiments and initial GP model
design <- X <- matrix(signif(maximinLHS(n, nvar), 2), ncol = nvar)
response <- Y <- apply(design, 1, f, theta = theta_dir)
model <- mleHomGP(design, response, known = list(beta0 = 0))

C_hat <- C_GP(model)
print(C_hat)
print(subspace_dist(C_hat, matrix(act_dir, nrow = nvar), r = 1))

# New designs
Xnew <- matrix(runif(2), 1)
Znew <- f(Xnew, theta_dir)
C_new <- update(C_hat, Xnew, Znew)
print(C_new)
subspace_dist(C_new, matrix(act_dir, nrow = nvar), r = 1)