| Field | Value |
|---|---|
| Title | Likelihood-Free Parameter Estimation using Neural Networks |
| Description | An 'R' interface to the 'Julia' package 'NeuralEstimators.jl'. The package facilitates the user-friendly development of neural point estimators, which are neural networks that map data to a point summary of the posterior distribution. These estimators are likelihood-free and amortised, in the sense that, after an initial setup cost, inference from observed data can be made in a fraction of the time required by conventional approaches; see Sainsbury-Dale, Zammit-Mangion, and Huser (2024) <doi:10.1080/00031305.2023.2249522> for further details and an accessible introduction. The package also enables the construction of neural networks that approximate the likelihood-to-evidence ratio in an amortised manner, allowing one to perform inference based on the likelihood function or the entire posterior distribution; see Zammit-Mangion, Sainsbury-Dale, and Huser (2024, Sec. 5.2) <doi:10.48550/arXiv.2404.12484>, and the references therein. The package accommodates any model for which simulation is feasible by allowing the user to implicitly define their model through simulated data. |
| Authors | Matthew Sainsbury-Dale [aut, cre] |
| Maintainer | Matthew Sainsbury-Dale <[email protected]> |
| License | GPL (>= 2) |
| Version | 0.1.1 |
| Built | 2024-11-04 03:31:19 UTC |
| Source | CRAN |
assess a neural estimator
assess(estimators, parameters, Z, estimator_names = NULL, parameter_names = NULL, use_gpu = TRUE, verbose = TRUE)
| Argument | Description |
|---|---|
| `estimators` | a list of (neural) estimators |
| `parameters` | true parameters, stored as a p × K matrix, where p is the number of parameters in the statistical model and K is the number of sampled parameter vectors |
| `Z` | data simulated conditionally on the `parameters` |
| `estimator_names` | list of names of the estimators (sensible defaults provided) |
| `parameter_names` | list of names of the parameters (sensible defaults provided) |
| `use_gpu` | a boolean indicating whether to use the GPU if it is available (default `TRUE`) |
| `verbose` | a boolean indicating whether information should be printed to the console |
a list of two data frames: `runtimes`, which contains the total time taken for each estimator, and `estimates`, a long-form data frame with the following columns:
"estimator"; the name of the estimator
"parameter"; the name of the parameter
"truth"; the true value of the parameter
"estimate"; the estimated value of the parameter
"m"; the sample size (number of iid replicates)
"k"; the index of the parameter vector in the test set
"j"; the index of the data set
See also `risk()`, `rmse()`, `bias()`, `plotestimates()`, and `plotdistribution()` for computing various empirical diagnostics and visualisations based on an assessment object.
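A minimal usage sketch, assuming `estimator` is a trained neural Bayes estimator for a model with parameters mu and sigma, and that `sampler()` and `simulator()` are the user-defined R functions from the `train()` example below:

```r
## Not run:
library("NeuralEstimators")
# Simulate a test set of K = 100 parameter vectors and corresponding data
theta_test <- sampler(100)
Z_test <- simulator(theta_test, 30)
# Assess the estimator over the test set
assessment <- assess(list(estimator), theta_test, Z_test,
                     parameter_names = c("mu", "sigma"))
head(assessment$estimates)
assessment$runtimes
## End(Not run)
```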
computes a Monte Carlo approximation of an estimator's bias
bias(assessment, ...)
| Argument | Description |
|---|---|
| `assessment` | an object returned by `assess()` |
| `...` | optional arguments inherited from `risk()` |
a data frame giving the estimated bias
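A short sketch, assuming `assessment` is an object returned by `assess()` (as in the example above) and that arguments of `risk()` may be passed through `...`:

```r
## Not run:
# Estimated bias for each parameter
bias(assessment)
# Average over all parameters via an argument inherited from risk()
bias(assessment, average_over_parameters = TRUE)
## End(Not run)
```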
Compute bootstrap estimates from a neural estimator
bootstrap(estimator, Z, B = 400, blocks = NULL, use_gpu = TRUE)
| Argument | Description |
|---|---|
| `estimator` | a neural estimator |
| `Z` | either a list of data sets simulated conditionally on the fitted parameters (parametric bootstrap), or a single observed data set containing independent replicates, which will be sampled with replacement (non-parametric bootstrap) |
| `B` | number of non-parametric bootstrap estimates (default 400) |
| `blocks` | integer vector specifying the blocks in the non-parametric bootstrap (default `NULL`) |
| `use_gpu` | a boolean indicating whether to use the GPU if it is available (default `TRUE`) |
p × B matrix, where p is the number of parameters in the model and B is the number of bootstrap samples
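A minimal sketch of the non-parametric bootstrap, assuming `estimator` is a trained neural point estimator whose architecture is amenable to a single data set of independent replicates:

```r
## Not run:
library("NeuralEstimators")
# Observed data: 100 iid replicates of a univariate random variable
Z <- matrix(rnorm(100), nrow = 1)
# B = 400 bootstrap estimates, obtained by resampling the replicates of Z
theta_boot <- bootstrap(estimator, Z, B = 400)
# Percentile-based 95% bootstrap intervals for each parameter
apply(theta_boot, 1, quantile, probs = c(0.025, 0.975))
## End(Not run)
```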
For data `Z` with missing (`NA`) entries, returns an augmented data set (U, W), where W encodes the missingness pattern as an indicator vector and U is the original data `Z` with missing entries replaced by a fixed constant `c`.

The indicator vector W is stored in the second-to-last dimension of `Z`, which should be singleton. If the second-to-last dimension is not singleton, two singleton dimensions will be added to the array, and W will be stored in the new second-to-last dimension.
encodedata(Z, c = 0)
| Argument | Description |
|---|---|
| `Z` | data containing `NA` entries |
| `c` | fixed constant with which to replace `NA` entries (default 0) |
The augmented data set (U, W). If `Z` is provided as a list, the return type will be a `JuliaProxy` object; these objects can be indexed in the usual manner (e.g., using `[[`) or converted to an R object using `juliaGet()` (note, however, that `juliaGet()` can be slow for large data sets).
```r
## Not run:
library("NeuralEstimators")
Z <- matrix(c(1, 2, NA, NA, 5, 6, 7, NA, 9), nrow = 3)
encodedata(Z)
encodedata(list(Z, Z))
## End(Not run)
```
estimate parameters from observed data using a neural estimator
estimate(estimator, Z, theta = NULL, use_gpu = TRUE)
| Argument | Description |
|---|---|
| `estimator` | a neural estimator |
| `Z` | data; the format should be amenable to the architecture of `estimator` |
| `theta` | parameter vectors (only for neural estimators that take both the data and parameters as input, e.g., neural ratio estimators) |
| `use_gpu` | a boolean indicating whether to use the GPU if it is available (default `TRUE`) |
a matrix of parameter estimates (i.e., `estimator` applied to `Z`)
```r
## Not run:
library("NeuralEstimators")
library("JuliaConnectoR")
## Observed data: 100 replicates of a univariate random variable
Z <- matrix(rnorm(100), nrow = 1)
## Construct an (un-trained) point estimator
estimator <- initialise_estimator(1, architecture = "MLP")
## Apply the estimator
estimate(estimator, Z)
## End(Not run)
```
Helper function for initialising a neural estimator.
The estimator is couched in the DeepSets framework so that it can be applied to data with an arbitrary number of independent replicates (including the special case of a single replicate).
initialise_estimator(p, architecture, d = 1, estimator_type = "point", depth = 3, width = 32, activation = "relu", activation_output = "identity", variance_stabiliser = NULL, kernel_size = NULL, weight_by_distance = TRUE, probs = c(0.025, 0.975))
| Argument | Description |
|---|---|
| `p` | number of unknown parameters in the statistical model |
| `architecture` | a string: for unstructured data, one may use a fully-connected MLP (`"MLP"`); for data collected over a grid, a convolutional neural network (`"CNN"`); and for graphical or irregular spatial data, a graph neural network (`"GNN"`) |
| `d` | for unstructured multivariate data (i.e., when `architecture = "MLP"`), the dimension of each data replicate (default 1) |
| `estimator_type` | the type of estimator; either `"point"` or `"interval"` |
| `depth` | the number of hidden layers. Either a single integer or an integer vector of length two specifying the depth of the inner (summary) and outer (inference) networks of the DeepSets framework. Since there is an input and an output layer, the total number of layers is `sum(depth) + 2` |
| `width` | a single integer or an integer vector of length `sum(depth)` specifying the width of each hidden layer |
| `activation` | the (non-linear) activation function of each hidden layer. Accepts a string of Julia code (default `"relu"`) |
| `activation_output` | the activation function of the output layer. Accepts a string of Julia code (default `"identity"`) |
| `variance_stabiliser` | a function applied directly to the input, usually to stabilise the variance: a string (`"log"` for the natural logarithm, or `"cbrt"` for the cube-root function), or a string of Julia code that will be converted to a Julia function using `juliaEval()` |
| `kernel_size` | (applicable only to CNNs) a list of length `depth[1]` specifying the kernel size in each convolutional layer (see the examples below) |
| `weight_by_distance` | (applicable only to GNNs) flag indicating whether the estimator weights by spatial distance (default `TRUE`) |
| `probs` | (applicable only if `estimator_type = "interval"`) probability levels defining the lower and upper endpoints of the credible interval (default `c(0.025, 0.975)`) |
the initialised neural estimator, a JuliaProxy object
```r
## Not run:
library("NeuralEstimators")
p <- 2
initialise_estimator(p, architecture = "MLP")
initialise_estimator(p, architecture = "GNN")
## 1D convolution
initialise_estimator(p, architecture = "CNN", kernel_size = list(10, 5, 3))
## 2D convolution
initialise_estimator(p, architecture = "CNN",
                     kernel_size = list(c(10, 10), c(5, 5), c(3, 3)))
## End(Not run)
```
load a saved state of a neural estimator
loadstate(estimator, filename)
| Argument | Description |
|---|---|
| `estimator` | the neural estimator that we wish to load the state into |
| `filename` | file (including absolute path) of the neural-network state, saved as a `bson` file |
`estimator` updated with the saved state
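A save/load round-trip sketch, assuming `estimator` has been trained; the file path and architecture are hypothetical illustrations:

```r
## Not run:
library("NeuralEstimators")
# Save the state of the trained estimator (hypothetical path)
savestate(estimator, file.path(getwd(), "estimator_state.bson"))
# Initialise an estimator with the same architecture and load the saved state
estimator2 <- initialise_estimator(2, architecture = "MLP")
estimator2 <- loadstate(estimator2, file.path(getwd(), "estimator_state.bson"))
## End(Not run)
```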
load a collection of saved weights of a neural estimator
loadweights(estimator, filename)
| Argument | Description |
|---|---|
| `estimator` | the neural estimator that we wish to load weights into |
| `filename` | file (including absolute path) of the neural-network weights, saved as a `bson` file |
`estimator` updated with the saved weights
Given data `Z`, a neural likelihood-to-evidence-ratio `estimator`, and a `prior`, computes the implied approximate maximum a posteriori (MAP) estimate.

If a vector `theta0` of initial parameter estimates is given, the approximate posterior density is maximised by gradient descent. Otherwise, if a matrix of parameters `theta_grid` is given, the approximate posterior density is maximised by grid search.
mapestimate(estimator, Z, prior = NULL, theta_grid = NULL, theta0 = NULL, use_gpu = TRUE)
| Argument | Description |
|---|---|
| `estimator` | a neural likelihood-to-evidence-ratio estimator |
| `Z` | data; its format should be amenable to the architecture of `estimator` |
| `prior` | the prior (default uniform), specified as a Julia or R function |
| `theta_grid` | a (fine) gridding of the parameter space, given as a matrix with p rows, where p is the number of parameters in the model |
| `theta0` | a vector of initial parameter estimates |
| `use_gpu` | a boolean indicating whether to use the GPU if it is available (default `TRUE`) |
a p × K matrix of MAP estimates, where p is the number of parameters in the statistical model and K is the number of data sets provided in Z
See also `sampleposterior()` and `mlestimate()`.
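A grid-search sketch, assuming `estimator` is a trained neural likelihood-to-evidence-ratio estimator for a single parameter supported on [0, 1], and that `Z` is formatted to suit its architecture; the prior is a hypothetical choice:

```r
## Not run:
library("NeuralEstimators")
# Fine gridding of the one-dimensional parameter space
theta_grid <- matrix(seq(0, 1, length.out = 1000), nrow = 1)
# A non-uniform prior, specified as an R function
prior <- function(theta) dbeta(theta, 2, 2)
# Approximate MAP estimate via grid search
mapestimate(estimator, Z, prior = prior, theta_grid = theta_grid)
## End(Not run)
```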
Given data `Z` and a neural likelihood-to-evidence-ratio `estimator`, computes the implied approximate maximum-likelihood estimate.

If a vector `theta0` of initial parameter estimates is given, the approximate likelihood is maximised by gradient descent. Otherwise, if a matrix of parameters `theta_grid` is given, the approximate likelihood is maximised by grid search.
mlestimate(estimator, Z, theta_grid = NULL, theta0 = NULL, use_gpu = TRUE)
| Argument | Description |
|---|---|
| `estimator` | a neural likelihood-to-evidence-ratio estimator |
| `Z` | data; its format should be amenable to the architecture of `estimator` |
| `theta_grid` | a (fine) gridding of the parameter space, given as a matrix with p rows, where p is the number of parameters in the model |
| `theta0` | a vector of initial parameter estimates |
| `use_gpu` | a boolean indicating whether to use the GPU if it is available (default `TRUE`) |
a p × K matrix of maximum-likelihood estimates, where p is the number of parameters in the statistical model and K is the number of data sets provided in Z
See also `sampleposterior()` and `mapestimate()`.
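A companion sketch under the same assumptions as the `mapestimate()` example above:

```r
## Not run:
# Approximate maximum-likelihood estimate via grid search
mlestimate(estimator, Z, theta_grid = theta_grid)
# Alternatively, maximise the approximate likelihood by gradient descent
mlestimate(estimator, Z, theta0 = 0.5)
## End(Not run)
```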
Plot the empirical sampling distribution of an estimator.
plotdistribution(df, type = c("box", "density", "scatter"), parameter_labels = NULL, estimator_labels = ggplot2::waiver(), truth_colour = "black", truth_size = 8, truth_line_size = NULL, pairs = FALSE, upper_triangle_plots = NULL, legend = TRUE, return_list = FALSE, flip = FALSE)
| Argument | Description |
|---|---|
| `df` | a long-form data frame containing fields `estimator`, `parameter`, `estimate`, and `truth` (see the examples below) |
| `type` | string indicating the type of plot: box plots (`"box"`), kernel density estimates for each individual parameter (`"density"`), or scatter plots for pairs of parameters (`"scatter"`) |
| `parameter_labels` | a named vector containing parameter labels |
| `estimator_labels` | a named vector containing estimator labels |
| `truth_colour` | the colour used to denote the true parameter value |
| `truth_size` | the size of the point used to denote the true parameter value (applicable only for `type = "scatter"`) |
| `truth_line_size` | the size of the cross-hairs used to denote the true parameter value. If `NULL` (default), the cross-hairs are not plotted |
| `pairs` | logical; should we combine the scatter plots into a single pairs plot (applicable only for `type = "scatter"`)? |
| `upper_triangle_plots` | an optional list of plots to include in the upper triangle of the pairs plot |
| `legend` | flag; should we include the legend (only applies when constructing a pairs plot)? |
| `return_list` | flag; should the parameters be split into a list? |
| `flip` | flag; should the boxplots be "flipped" using `coord_flip()`? |
a list of `'ggplot'` objects or, if `pairs = TRUE`, a single `'ggplot'`.
```r
## Not run:
# In the following, we have two estimators and, for each parameter, 50 estimates
# from each estimator.
library("NeuralEstimators")
library("ggplot2")

estimators <- c("Estimator 1", "Estimator 2")
estimator_labels <- c("Estimator 1" = expression(hat(theta)[1]("·")),
                      "Estimator 2" = expression(hat(theta)[2]("·")))

# Single parameter:
df <- data.frame(
  estimator = estimators, truth = 0, parameter = "mu",
  estimate = rnorm(2*50),
  replicate = rep(1:50, each = 2)
)
parameter_labels <- c("mu" = expression(mu))
plotdistribution(df)
plotdistribution(df, type = "density")
plotdistribution(df, parameter_labels = parameter_labels, estimator_labels = estimator_labels)

# Two parameters:
df <- rbind(df, data.frame(
  estimator = estimators, truth = 1, parameter = "sigma",
  estimate = rgamma(2*50, shape = 1, rate = 1),
  replicate = rep(1:50, each = 2)
))
parameter_labels <- c(parameter_labels, "sigma" = expression(sigma))
plotdistribution(df, parameter_labels = parameter_labels)
plotdistribution(df, parameter_labels = parameter_labels, type = "density")
plotdistribution(df, parameter_labels = parameter_labels, type = "scatter")

# Three parameters:
df <- rbind(df, data.frame(
  estimator = estimators, truth = 0.25, parameter = "alpha",
  estimate = 0.5 * runif(2*50),
  replicate = rep(1:50, each = 2)
))
parameter_labels <- c(parameter_labels, "alpha" = expression(alpha))
plotdistribution(df, parameter_labels = parameter_labels)
plotdistribution(df, parameter_labels = parameter_labels, type = "density")
plotdistribution(df, parameter_labels = parameter_labels, type = "scatter")
plotdistribution(df, parameter_labels = parameter_labels, type = "scatter", pairs = TRUE)

# Pairs plot with user-specified plots in the upper triangle:
upper_triangle_plots <- lapply(1:3, function(i) {
  x <- rnorm(10)
  y <- rnorm(10)
  shape <- sample(c("Class 1", "Class 2"), 10, replace = TRUE)
  ggplot() + geom_point(aes(x = x, y = y, shape = shape)) + labs(shape = "") + theme_bw()
})
plotdistribution(
  df,
  parameter_labels = parameter_labels,
  estimator_labels = estimator_labels,
  type = "scatter", pairs = TRUE,
  upper_triangle_plots = upper_triangle_plots
)
## End(Not run)
```
Plot estimates vs. true values.
plotestimates(df, estimator_labels = ggplot2::waiver(), parameter_labels = NULL)
| Argument | Description |
|---|---|
| `df` | a long-form data frame containing fields `estimator`, `parameter`, `estimate`, and `truth` |
| `estimator_labels` | a named vector containing estimator labels |
| `parameter_labels` | a named vector containing parameter labels |
a `'ggplot'` of the estimates for each parameter against the true value.
```r
## Not run:
K <- 50
df <- data.frame(
  estimator = c("Estimator 1", "Estimator 2"),
  parameter = rep(c("mu", "sigma"), each = K),
  truth = 1:(2*K),
  estimate = 1:(2*K) + rnorm(4*K)
)
estimator_labels <- c("Estimator 1" = expression(hat(theta)[1]("·")),
                      "Estimator 2" = expression(hat(theta)[2]("·")))
parameter_labels <- c("mu" = expression(mu), "sigma" = expression(sigma))
plotestimates(df, estimator_labels = estimator_labels, parameter_labels = parameter_labels)
## End(Not run)
```
computes a Monte Carlo approximation of an estimator's Bayes risk
risk(assessment, loss = function(x, y) abs(x - y), average_over_parameters = FALSE, average_over_sample_sizes = TRUE)
| Argument | Description |
|---|---|
| `assessment` | an object returned by `assess()` |
| `loss` | a binary operator defining the loss function (default absolute-error loss) |
| `average_over_parameters` | if `TRUE`, the loss is averaged over all parameters; otherwise (default), the loss is averaged over each parameter separately |
| `average_over_sample_sizes` | if `TRUE` (default), the loss is averaged over all sample sizes m; otherwise, the loss is averaged over each sample size separately |
a data frame giving an estimate of the Bayes risk
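A short sketch, assuming `assessment` is an object returned by `assess()`:

```r
## Not run:
# Bayes risk under the default absolute-error loss
risk(assessment)
# Bayes risk under squared-error loss
risk(assessment, loss = function(x, y) (x - y)^2)
## End(Not run)
```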
computes a Monte Carlo approximation of an estimator's root-mean-square error (RMSE)
rmse(assessment, ...)
| Argument | Description |
|---|---|
| `assessment` | an object returned by `assess()` |
| `...` | optional arguments inherited from `risk()` |
a data frame giving the estimated RMSE
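Similarly, assuming `assessment` is an object returned by `assess()`:

```r
## Not run:
# Estimated RMSE for each parameter
rmse(assessment)
## End(Not run)
```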
Given data `Z`, a neural likelihood-to-evidence-ratio `estimator`, and a `prior`, draws samples from the implied approximate posterior distribution.

Currently, the sampling algorithm is based on a fine gridding `theta_grid` of the parameter space. The approximate posterior density is evaluated over this grid, which is then used to draw samples. This is very effective when making inference with a small number of parameters. For models with a large number of parameters, other sampling algorithms may be needed (please feel free to contact the package maintainer for discussion).
sampleposterior(estimator, Z, theta_grid, N = 1000, prior = NULL, use_gpu = TRUE)
| Argument | Description |
|---|---|
| `estimator` | a neural likelihood-to-evidence-ratio estimator |
| `Z` | data; its format should be amenable to the architecture of `estimator` |
| `theta_grid` | a (fine) gridding of the parameter space, given as a matrix with p rows, where p is the number of parameters in the model |
| `N` | number of samples to draw (default 1000) |
| `prior` | the prior (default uniform), specified as a Julia or R function |
| `use_gpu` | a boolean indicating whether to use the GPU if it is available (default `TRUE`) |
a p × N matrix of posterior samples, where p is the number of parameters in the model. If multiple data sets are given in `Z`, a list of posterior samples will be returned.
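A sketch under the same assumptions as the `mapestimate()` example above:

```r
## Not run:
# Draw N = 1000 samples from the approximate posterior
samples <- sampleposterior(estimator, Z, theta_grid, N = 1000)
# Posterior mean and 95% credible interval for the single parameter
mean(samples)
quantile(samples, probs = c(0.025, 0.975))
## End(Not run)
```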
save the state of a neural estimator
savestate(estimator, filename)
| Argument | Description |
|---|---|
| `estimator` | the neural estimator that we wish to save |
| `filename` | file in which to save the neural-network state as a `bson` file |
No return value, called for side effects
For k > 0, defines Julia code for the loss function L(θ̂, θ) = tanh(|θ̂ − θ| / k), which approximates the 0-1 loss as k tends to zero. The resulting string is intended to be used in the function `train()`, but can also be converted to a callable function using `juliaEval()`.
tanhloss(k)
| Argument | Description |
|---|---|
| `k` | positive numeric value that controls the smoothness of the approximation |
String defining the tanh loss function in Julia code.
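A sketch of how the returned string might be used, assuming the training objects (`estimator`, `theta_train`, `theta_val`, `Z_train`, `Z_val`) from the `train()` example below:

```r
## Not run:
library("NeuralEstimators")
# Julia code for the tanh loss with smoothing parameter k = 0.1
loss <- tanhloss(0.1)
# Pass the string to train()
estimator <- train(estimator,
                   theta_train = theta_train, theta_val = theta_val,
                   Z_train = Z_train, Z_val = Z_val,
                   loss = loss)
## End(Not run)
```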
The function caters for different variants of "on-the-fly" simulation. Specifically, a `sampler` can be provided to continuously sample new parameter vectors from the prior, and a `simulator` can be provided to continuously simulate new data conditional on the parameters. If provided with specific sets of parameters (`theta_train` and `theta_val`) and/or data (`Z_train` and `Z_val`), they will be held fixed during training.

Note that using R functions to perform "on-the-fly" simulation requires the user to have installed the Julia package RCall.
train(estimator, sampler = NULL, simulator = NULL, theta_train = NULL, theta_val = NULL, Z_train = NULL, Z_val = NULL, m = NULL, M = NULL, K = 10000, xi = NULL, loss = "absolute-error", learning_rate = 1e-04, epochs = 100, batchsize = 32, savepath = "", stopping_epochs = 5, epochs_per_Z_refresh = 1, epochs_per_theta_refresh = 1, simulate_just_in_time = FALSE, use_gpu = TRUE, verbose = TRUE)
| Argument | Description |
|---|---|
| `estimator` | a neural estimator |
| `sampler` | a function that takes an integer `K` and returns a p × K matrix of parameters sampled from the prior (see the examples below) |
| `simulator` | a function that takes a p × K matrix of parameters and an integer `m`, and returns `K` simulated data sets, each containing `m` independent replicates (see the examples below) |
| `theta_train` | a set of parameters used for updating the estimator using stochastic gradient descent |
| `theta_val` | a set of parameters used for monitoring the performance of the estimator during training |
| `Z_train` | a simulated data set used for updating the estimator using stochastic gradient descent |
| `Z_val` | a simulated data set used for monitoring the performance of the estimator during training |
| `m` | vector of sample sizes (default `NULL`); if several sample sizes are given, one estimator is trained for each sample size |
| `M` | deprecated; use `m` |
| `K` | the number of parameter vectors sampled in the training set at each epoch; the size of the validation set is set to `K/5` |
| `xi` | a list of objects used for data simulation (e.g., distance matrices); if it is provided, the parameter sampler is called as `sampler(K, xi)` |
| `loss` | the loss function: a string (`'absolute-error'` for mean-absolute-error loss or `'squared-error'` for mean-squared-error loss), or a string of Julia code defining the loss function. For some classes of estimators (e.g., neural ratio estimators), the loss function is determined automatically and this argument is ignored |
| `learning_rate` | the learning rate for the optimiser ADAM (default 1e-4) |
| `epochs` | the number of epochs used to train the neural network. An epoch is one complete pass through the entire training data set when doing stochastic gradient descent |
| `batchsize` | the batch size to use when performing stochastic gradient descent, that is, the number of training samples processed between each update of the neural-network parameters |
| `savepath` | path in which to save the trained estimator and other information; if empty (default), nothing is saved. Otherwise, the neural-network parameters (i.e., the weights and biases) will be saved during training as `bson` files |
| `stopping_epochs` | cease training if the risk does not improve in this number of epochs (default 5) |
| `epochs_per_Z_refresh` | integer indicating how often to refresh the training data (default 1) |
| `epochs_per_theta_refresh` | integer indicating how often to refresh the training parameters (default 1); must be a multiple of `epochs_per_Z_refresh` |
| `simulate_just_in_time` | flag indicating whether we should simulate "just-in-time", in the sense that only a `batchsize` number of parameter vectors and corresponding data sets are held in memory at a given time |
| `use_gpu` | a boolean indicating whether to use the GPU if one is available |
| `verbose` | a boolean indicating whether information, including empirical risk values and timings, should be printed to the console during training |
a trained neural estimator or, if `m` is a vector, a list of trained neural estimators
See also `assess()` for assessing an estimator post training, and `estimate()` for applying an estimator to observed data.
```r
## Not run:
# Construct a neural Bayes estimator for replicated univariate Gaussian
# data with unknown mean and standard deviation.

# Load R and Julia packages
library("NeuralEstimators")
library("JuliaConnectoR")
juliaEval("using NeuralEstimators, Flux, Distributions")

# Define the neural-network architecture
estimator <- juliaEval('
  d = 1   # dimension of each replicate
  p = 2   # number of parameters in the model
  w = 32  # width of each layer
  psi = Chain(Dense(d, w, relu), Dense(w, w, relu))
  phi = Chain(Dense(w, w, relu), Dense(w, p))
  deepset = DeepSet(psi, phi)
  estimator = PointEstimator(deepset)
')

# Sampler from the prior
sampler <- function(K) {
  mu    <- rnorm(K)      # Gaussian prior for the mean
  sigma <- rgamma(K, 1)  # Gamma prior for the standard deviation
  theta <- matrix(c(mu, sigma), byrow = TRUE, ncol = K)
  return(theta)
}

# Data simulator
simulator <- function(theta_set, m) {
  apply(theta_set, 2, function(theta) {
    t(rnorm(m, theta[1], theta[2]))
  }, simplify = FALSE)
}

# Train using fixed parameter and data sets
theta_train <- sampler(10000)
theta_val   <- sampler(2000)
m <- 30 # number of iid replicates
Z_train <- simulator(theta_train, m)
Z_val   <- simulator(theta_val, m)
estimator <- train(estimator,
                   theta_train = theta_train,
                   theta_val   = theta_val,
                   Z_train     = Z_train,
                   Z_val       = Z_val)

# Train using simulation on-the-fly (requires Julia package RCall)
estimator <- train(estimator, sampler = sampler, simulator = simulator, m = m)

##### Simulation on-the-fly using Julia functions ####

# Defining the sampler and simulator in Julia can improve computational
# efficiency by avoiding the overhead of communicating between R and Julia.
# Julia is also fast (comparable to C) and so it can be useful to define
# these functions in Julia when they involve for-loops.

# Parameter sampler
sampler <- juliaEval("
  function sampler(K)
    mu    = rand(Normal(0, 1), K)
    sigma = rand(Gamma(1), K)
    theta = hcat(mu, sigma)'
    return theta
  end")

# Data simulator
simulator <- juliaEval("
  function simulator(theta_matrix, m)
    Z = [rand(Normal(theta[1], theta[2]), 1, m) for theta in eachcol(theta_matrix)]
    return Z
  end")

# Train the estimator
estimator <- train(estimator, sampler = sampler, simulator = simulator, m = m)
## End(Not run)
```