Title: Iterative Bias Reduction
Description: Multivariate smoothing using iterative bias reduction with kernel, thin plate splines, Duchon splines or low rank splines.
Authors: Pierre-Andre Cornillon, Nicolas Hengartner, Eric Matzner-Lober
Maintainer: Pierre-Andre Cornillon <[email protected]>
License: GPL (>= 2)
Version: 2.0-4
Built: 2024-11-09 06:11:14 UTC
Source: CRAN
ibr is an R package for multivariate smoothing using the iterative bias reduction smoother.
We are interested in smoothing (the values of) a vector of n observations y by p covariates measured at the same n observations (gathered in the matrix X). The iterative bias reduction produces a sequence of smoothers; the first element of the sequence is the pilot smoother, which can be either a kernel or a thin plate spline smoother. In the case of a kernel smoother, the kernel is built as a product of univariate kernels.
The most important parameter of the iterative bias reduction is the number of iterations k. Usually this parameter is unknown and is chosen from the search grid K to minimize a criterion (GCV, AIC, AICc, BIC or gMDL).
The user must choose the pilot smoother (kernel "k", thin plate splines "tps" or Duchon splines "ds") plus the values of the bandwidths (kernel) or of the smoothness coefficient lambda (thin plate splines). As the choice of these raw values depends on each particular dataset, one can instead rely on effective degrees of freedom, with default values given as degrees of freedom; see the argument df of the main function ibr.
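For instance, a minimal sketch of a typical call driven by the df argument (assuming the ozone data shipped with the package and the formula interface shown in the examples below):

library(ibr)
data(ozone, package = "ibr")
# pilot smoother specified through effective degrees of freedom rather than raw bandwidths
res.k  <- ibr(Ozone~., data = ozone, smoother = "k",  df = 1.1)  # product Gaussian kernel pilot
res.ds <- ibr(Ozone~., data = ozone, smoother = "ds", df = 1.2)  # Duchon splines pilot
summary(res.k)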
Index of functions to be used by the end user:

ibr: Iterative bias reduction smoothing
plot.ibr: Plot diagnostics for an ibr object
predict.ibr: Predicted values using iterative bias reduction smoothers
forward: Variable selection for ibr (forward method)
print.summary.ibr: Printing iterative bias reduction summaries
summary.ibr: Summarizing iterative bias reduction fits
Pierre-Andre Cornillon, Nicolas Hengartner, Eric Matzner-Lober
Maintainer: Pierre-Andre Cornillon <[email protected]>
## Not run: 
data(ozone, package = "ibr")
res.ibr <- ibr(Ozone~., data = ozone, smoother = "k", df = 1.1)
summary(res.ibr)
predict(res.ibr)
plot(res.ibr)
## End(Not run)
Generic function calculating the Akaike information criterion for a model object of class ibr for which a log-likelihood value can be obtained, according to the formula -2*log-likelihood + k*df, where df represents the effective degree of freedom (trace) of the smoother in the fitted model, and k = 2 for the usual AIC, or k = log(n) (n the number of observations) for the so-called BIC or SBC (Schwarz's Bayesian criterion).
## S3 method for class 'ibr' AIC(object, ..., k = 2)
object |
A fitted model object of class ibr. |
... |
Not used. |
k |
Numeric, the penalty per parameter to be used; the default k = 2 is the classical AIC. |
The ibr method for AIC, AIC.ibr(), calculates log(sigma^2) + k*df/n, where df is the trace of the smoother and sigma^2 is the mean of the squared residuals.
It returns a numeric value with the corresponding AIC (or BIC, or ..., depending on k).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Hurvich, C. M., Simonoff J. S. and Tsai, C. L. (1998) Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion. Journal of the Royal Statistical Society, Series B, 60, 271-293 .
## Not run: 
data(ozone, package = "ibr")
res.ibr <- ibr(Ozone~., data = ozone, df = 1.2)
summary(res.ibr)
predict(res.ibr)
## End(Not run)
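The method can then be applied to the fitted object; a short sketch (assuming the fit res.ibr from the example above):

AIC(res.ibr)                        # penalty k = 2: the usual AIC
AIC(res.ibr, k = log(nrow(ozone)))  # penalty k = log(n): the BIC/SBC variant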
Calculates the coefficients for the iterative bias reduction smoothers. This function is not intended to be used directly.
betaA(n, eigenvaluesA, tPADmdemiY, DdemiPA, ddlmini, k, index0)
n |
The number of observations. |
eigenvaluesA |
Vector of the eigenvalues of the symmetric matrix A. |
tPADmdemiY |
The transpose of the matrix of eigen vectors of the symmetric matrix A times the inverse of the square root of the diagonal matrix D. |
DdemiPA |
The square root of the diagonal matrix D times the eigen vectors of the symmetric matrix A. |
ddlmini |
The number of eigenvalues (numerically) equal to 1. |
k |
A scalar which gives the number of iterations. |
index0 |
The index of the first eigen values of S numerically equal to 0. |
See the reference for detailed explanation of A and D and the meaning of coefficients.
Returns the vector of coefficients (of length n, the number of observations).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function evaluates the smoothing matrix H, the matrices Q and S and their associated coefficients c and s. This function is not intended to be used directly.
betaS1(n,U,tUy,eigenvaluesS1,ddlmini,k,lambda,Sgu,Qgu,index0)
n |
The number of observations. |
U |
The matrix of eigen vectors of the symmetric smoothing matrix S. |
tUy |
The transpose of the matrix of eigen vectors of the symmetric smoothing matrix S times the vector of observation y. |
eigenvaluesS1 |
Vector of the eigenvalues of the symmetric smoothing matrix S. |
ddlmini |
The number of eigen values of S equal to 1. |
k |
A numeric vector which gives the number of iterations. |
lambda |
The smoothness coefficient lambda for thin plate splines of
order |
Sgu |
The matrix of the polynomial null space S. |
Qgu |
The matrix of the semi kernel (or radial basis) Q. |
index0 |
The index of the first eigen values of S numerically equal to 0. |
See the reference for detailed explanation of Q (the semi kernel or radial basis) and S (the polynomial null space).
Returns a list containing the coefficients for the null space (dgub) and for the semi-kernel (cgub).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
C. Gu (2002) Smoothing Spline ANOVA Models. New York: Springer-Verlag.
The function evaluates the smoothing matrix H, the matrices Q and S and their associated coefficients c and s. This function is not intended to be used directly.
betaS1lr(n,U,tUy,eigenvaluesS1,ddlmini,k,lambda,rank,Rm1U,index0)
n |
The number of observations. |
U |
The matrix of eigen vectors of the symmetric smoothing matrix S. |
tUy |
The transpose of the matrix of eigen vectors of the symmetric smoothing matrix S times the vector of observation y. |
eigenvaluesS1 |
Vector of the eigenvalues of the symmetric smoothing matrix S. |
ddlmini |
The number of eigen values of S equal to 1. |
k |
A numeric vector which gives the number of iterations. |
lambda |
The smoothness coefficient lambda for thin plate splines of
order |
rank |
The rank of lowrank splines. |
Rm1U |
matrix R^-1U (see reference). |
index0 |
The index of the first eigen values of S numerically equal to 0. |
See the reference for detailed explanation of Q (the semi kernel or radial basis) and S (the polynomial null space).
Returns beta
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
Functions calculating the Bayesian Information Criterion, the Generalized Cross Validation criterion and the Corrected Akaike information criterion.
## S3 method for class 'ibr' BIC(object, ...) ## S3 method for class 'ibr' GCV(object, ...) ## S3 method for class 'ibr' AICc(object, ...)
object |
A fitted model object of class ibr. |
... |
Only for compatibility purpose with |
The ibr method for BIC, BIC.ibr(), calculates log(sigma^2) + log(n)*df/n, where df is the trace of the smoother.
The ibr method for GCV, GCV.ibr(), calculates the generalized cross-validation criterion sigma^2/(1 - df/n)^2.
The ibr method for AICc, AICc.ibr(), calculates the corrected AIC of Hurvich, Simonoff and Tsai (1998): log(sigma^2) + 1 + 2*(df + 1)/(n - df - 2).
Returns a numeric value with the corresponding BIC, GCV or AICc.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Hurvich, C. M., Simonoff J. S. and Tsai, C. L. (1998) Smoothing Parameter Selection in Nonparametric Regression Using an Improved Akaike Information Criterion. Journal of the Royal Statistical Society, Series B, 60, 271-293 .
## Not run: 
data(ozone, package = "ibr")
res.ibr <- ibr(Ozone~., data = ozone)
BIC(res.ibr)
GCV(res.ibr)
AICc(res.ibr)
## End(Not run)
Perform a search for the bandwidths in the given grid. For each explanatory variable, the bandwidth is chosen such that the trace of the smoothing matrix according to that variable (effective degree of freedom) is equal to a prescribed value. This function is not intended to be used directly.
bwchoice(X,objectif,kernelx="g",itermax=1000)
X |
A matrix with |
objectif |
A numeric vector of either length 1 or length equal to the
number of columns of |
kernelx |
String which allows to choose between gaussian kernel
( |
itermax |
A scalar which controls the number of iterations for that search. |
Returns a vector of length d, the number of explanatory variables, where each coordinate is the value of the selected bandwidth for the corresponding explanatory variable.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
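Although bwchoice is internal, the following hypothetical call sketches its role (it assumes the function is reachable via ibr::: and that the ozone data is loaded as in the examples above):

X  <- as.matrix(ozone[, -1])
bw <- ibr:::bwchoice(X, objectif = 1.5)  # one bandwidth per column, each univariate smoother with trace about 1.5
bw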
Calculates the decomposition of the kernel smoothing matrix into two parts: a diagonal matrix D and a symmetric matrix A. This function is not intended to be used directly.
calcA(X,bx,kernelx="g")
X |
The matrix of explanatory variables, size n, p. |
bx |
The vector of bandwidth of length p. |
kernelx |
Character string which allows to choose between gaussian kernel
( |
see the reference for detailed explanation of A and D and the meaning of coefficients.
Returns a list containing the symmetric matrix A (component A), the square root of the diagonal matrix D (component Ddemi) and the trace of the smoother (component df).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
The function cvobs
gives the index of observations in each test set. This function is not intended to be used directly.
cvobs(n,ntest,ntrain,Kfold,type= c("random", "timeseries", "consecutive", "interleaved"), npermut, seed)
n |
The total number of observations. |
ntest |
The number of observations in test set. |
ntrain |
The number of observations in training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draw (with replacement), used for
|
seed |
Controls the seed of random generator
(via |
Returns a list with in each component the index of observations to be used as a test set.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Searches bandwidths for each univariate kernel smoother such that the product of these univariate kernels gives a kernel smoother with a chosen effective degree of freedom (trace of the smoother). The bandwidths are constrained to give, for each explanatory variable, a kernel smoother with the same trace as the others. This function is not intended to be used directly.
departnoyau(df, x, kernel, dftobwitmax, n, p, dfobjectif)
df |
A numeric vector giving the effective degree of freedom (trace) of the
univariate smoothing matrix for each variable of
|
x |
Matrix of explanatory variables, size n, p. |
kernel |
Character string which allows to choose between gaussian kernel
( |
dftobwitmax |
Specifies the maximum number of iterations
transmitted to |
n |
Number of rows of data matrix x. |
p |
Number of columns of data matrix x. |
dfobjectif |
A numeric vector of length 1 which indicates
the desired effective degree of freedom (trace) of the smoothing
matrix (product kernel smoother) for |
Returns the desired bandwidths.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
The function evaluates the smoothing matrix H, the matrices Q and S and their associated coefficients c and s. This function is not intended to be used directly.
dssmoother(X,Y=NULL,lambda,m,s)
X |
Matrix of explanatory variables, size n,p. |
Y |
Vector of response variable. If null, only the smoothing matrix is returned. |
lambda |
The smoothness coefficient lambda for thin plate splines of
order |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must verify 2m+2s/d>1, where d is the number of explanatory variables. |
s |
The power of weighting function. For thin plate splines s is equal to 0. This real must be strictly smaller than d/2 (where d is the number of explanatory variables) and must verify 2m+2s/d>1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (See Duchon, 1977). |
see the reference for detailed explanation of Q (the semi kernel or radial basis) and S (the polynomial null space).
Returns a list containing the smoothing matrix H and two matrices denoted Sgu (for the null space) and Qgu (for the semi kernel).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
C. Gu (2002) Smoothing Spline ANOVA Models. New York: Springer-Verlag.
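For illustration only, a hypothetical direct call with the pseudo-cubic choice m = 2, s = (d-1)/2 (the function is internal, lambda = 1 is an arbitrary value and the ozone data is assumed to be loaded as in the examples above):

X <- as.matrix(ozone[, -1])
y <- ozone[, 1]
d <- ncol(X)
fit <- ibr:::dssmoother(X, y, lambda = 1, m = 2, s = (d - 1)/2)  # pseudo-cubic Duchon splines
str(fit, max.level = 1)  # list with the smoothing matrix H and the matrices Sgu and Qgu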
The function evaluates the matrices Q and S related to the explanatory variables at any points. This function is not intended to be used directly.
dsSx(X,Xetoile,m=2,s=0)
X |
Matrix of explanatory variables, size n,p. |
Xetoile |
Matrix of new observations with the same number of
variables as |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must verify 2m+2s/d>1, where d is the number of explanatory variables. |
s |
The power of weighting function. For thin plate splines s is equal to 0. This real must be strictly smaller than d/2 (where d is the number of explanatory variables) and must verify 2m+2s/d>1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (See Duchon, 1977). |
see the reference for detailed explanation of Q (the semi kernel) and S (the polynomial null space).
Returns a list containing two matrices denoted Sgu (for the null space) and Qgu (for the semi kernel).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
C. Gu (2002) Smoothing Spline ANOVA Models. New York: Springer-Verlag.
The function DuchonQ
computes the semi-kernel of Duchon splines. This function is not intended to be used directly.
DuchonQ(x,xk,m=2,s=0,symmetric=TRUE)
x |
A numeric matrix of explanatory variables, with n rows and p columns. |
xk |
A numeric matrix of explanatory variables, with nk rows and p columns. |
m |
Order of derivatives. |
s |
Exponent for the weight function. |
symmetric |
Boolean: if |
The semi-kernel evaluated.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
The function DuchonS computes the polynomial part (null space) of Duchon splines. This function is not intended to be used directly.
DuchonS(x,m=2)
x |
A numeric matrix of explanatory variables, with n rows and p columns. |
m |
Order of derivatives. |
The polynomial part evaluated.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
Evaluates the fits for the iterative bias reduction smoother, using a kernel smoother and its decomposition into a symmetric matrix and a diagonal matrix. This function is not intended to be used directly.
fittedA(n, eigenvaluesA, tPADmdemiY, DdemiPA, ddlmini, k)
n |
The number of observations. |
eigenvaluesA |
Vector of the eigenvalues of the symmetric matrix A. |
tPADmdemiY |
The transpose of the matrix of eigen vectors of the symmetric matrix A times the inverse of the square root of the diagonal matrix D. |
DdemiPA |
The square root of the diagonal matrix D times the eigen vectors of the symmetric matrix A. |
ddlmini |
The number of eigenvalues (numerically) equal to 1. |
k |
A scalar which gives the number of iterations. |
See the reference for detailed explanation of A and D.
Returns a list of two components: fitted contains the fitted values and trace contains the trace (effective degree of freedom) of the iterated bias reduction smoother.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function evaluates the fit of the iterative bias reduction model at iteration k. This function is not intended to be used directly.
fittedS1(n,U,tUy,eigenvaluesS1,ddlmini,k)
n |
The number of observations. |
U |
The matrix of eigen vectors of the symmetric smoothing matrix S. |
tUy |
The transpose of the matrix of eigen vectors of the symmetric smoothing matrix S times the vector of observation y. |
eigenvaluesS1 |
Vector of the eigenvalues of the symmetric smoothing matrix S. |
ddlmini |
The number of eigen values of S equal to 1. |
k |
A numeric vector which gives the number of iterations |
See the reference for a detailed explanation of the computation of the iterative bias reduction smoother.
Returns a vector containing the fit.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function evaluates the fit of the iterative bias reduction model at iteration k. This function is not intended to be used directly.
fittedS1lr(n,U,tUy,eigenvaluesS1,ddlmini,k,rank)
n |
The number of observations. |
U |
The matrix of eigen vectors of the symmetric smoothing matrix S. |
tUy |
The transpose of the matrix of eigen vectors of the symmetric smoothing matrix S times the vector of observation y. |
eigenvaluesS1 |
Vector of the eigenvalues of the symmetric smoothing matrix S. |
ddlmini |
The number of eigen values of S equal to 1. |
k |
A numeric vector which gives the number of iterations |
rank |
The rank of lowrank splines. |
See the reference for a detailed explanation of the computation of the iterative bias reduction smoother.
Returns a vector containing the fit.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
Performs a forward variable selection for iterative bias reduction using kernel, thin plate splines or low rank splines. Missing values are not allowed.
forward(formula,data,subset,criterion="gcv",df=1.5,Kmin=1,Kmax=1e+06, smoother="k",kernel="g",rank=NULL,control.par=list(),cv.options=list(), varcrit=criterion)
formula |
An object of class |
data |
An optional data frame, list or environment (or object
coercible by |
subset |
An optional vector specifying a subset of observations to be used in the fitting process. |
criterion |
Character string. If the number of iterations
( |
df |
A numeric vector of either length 1 or length equal to the
number of columns of |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
smoother |
Character string which allows to choose between thin plate
splines |
kernel |
Character string which allows to choose between gaussian kernel
( |
rank |
Numeric value that control the rank of low rank splines
(denoted as |
control.par |
a named list that control optional parameters. The
components are
|
cv.options |
A named list which controls the way to do cross
validation with component |
varcrit |
Character string. Criterion used for variable
selection. The criteria available are GCV,
AIC ( |
Returns an object of class forwardibr, which is a matrix with p columns. In the first row, each entry j contains the value of the chosen criterion for the univariate smoother using the jth explanatory variable. The variable which realizes the minimum of the first row is included in the model, and the whole column for this variable is set to Inf except its first row. In the second row, each entry j contains the criterion for the bivariate smoother using the jth explanatory variable together with the variable already included. The variable which realizes the minimum of the second row is included in the model, and its column is set to Inf except its first two rows. This forward selection process continues until the chosen criterion increases.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
## Not run: 
data(ozone, package = "ibr")
res.ibr <- forward(Ozone~., data = ozone, df = 1.2)
apply(res.ibr, 1, which.min)
## End(Not run)
Performs iterative bias reduction using kernel, thin plate splines, Duchon splines or low rank splines. Missing values are not allowed.
ibr(formula, data, subset, criterion="gcv", df=1.5, Kmin=1, Kmax=1e+06, smoother="k", kernel="g", rank=NULL, control.par=list(), cv.options=list())
formula |
An object of class |
data |
An optional data frame, list or environment (or object
coercible by |
subset |
An optional vector specifying a subset of observations to be used in the fitting process. |
criterion |
A vector of string. If the number of iterations
( |
df |
A numeric vector of either length 1 or length equal to the
number of columns of |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
smoother |
Character string which allows to choose between thin plate
splines |
kernel |
Character string which allows to choose between gaussian kernel
( |
rank |
Numeric value that control the rank of low rank splines
(denoted as |
control.par |
A named list that control optional parameters. The
components are
|
cv.options |
A named list which controls the way to do cross
validation with component |
Returns an object of class ibr
which is a list including:
beta |
Vector of coefficients. |
residuals |
Vector of residuals. |
fitted |
Vector of fitted values. |
iter |
The number of iterations used. |
initialdf |
The initial effective degree of freedom of the pilot (or base) smoother. |
finaldf |
The effective degree of freedom of the iterated bias reduction
smoother at the |
bandwidth |
Vector of bandwidths for each explanatory variable. |
call |
The matched call |
parcall |
A list containing several components:
|
criteria |
Value
of the chosen criterion at the given iteration, |
alliter |
Numeric vector giving all the optimal number of iterations selected by the chosen criteria. |
allcriteria |
either a list containing all the criteria evaluated on the
grid |
call |
The matched call. |
terms |
The 'terms' object used. |
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
f <- function(x, y) {
  .75*exp(-((9*x-2)^2 + (9*y-2)^2)/4) +
  .75*exp(-((9*x+1)^2/49 + (9*y+1)^2/10)) +
  .50*exp(-((9*x-7)^2 + (9*y-3)^2)/4) -
  .20*exp(-((9*x-4)^2 + (9*y-7)^2))
}
# define a (fine) x-y grid and calculate the function values on the grid
ngrid <- 50
xf <- seq(0, 1, length = ngrid+2)[-c(1, ngrid+2)]
yf <- xf
zf <- outer(xf, yf, f)
grid <- cbind.data.frame(x = rep(xf, ngrid), y = rep(xf, rep(ngrid, ngrid)), z = as.vector(zf))
persp(xf, yf, zf, theta = 130, phi = 20, expand = 0.45, main = "True Function")
# generate a data set from function f with a signal to noise ratio of 5
noise <- .2
N <- 100
xr <- seq(0.05, 0.95, by = 0.1)
yr <- xr
zr <- outer(xr, yr, f)
set.seed(25)
std <- sqrt(noise*var(as.vector(zr)))
noise <- rnorm(length(zr), 0, std)
Z <- zr + matrix(noise, sqrt(N), sqrt(N))
# transpose the data to a column format
xc <- rep(xr, sqrt(N))
yc <- rep(yr, rep(sqrt(N), sqrt(N)))
data <- cbind.data.frame(x = xc, y = yc, z = as.vector(Z))
# fit an ibr model with a thin plate spline (order 2) pilot smoother
res.ibr <- ibr(z~x+y, data = data, df = 1.1, smoother = "tps")
fit <- matrix(predict(res.ibr, grid), ngrid, ngrid)
persp(xf, yf, fit, theta = 130, phi = 20, expand = 0.45, main = "Fit", zlab = "fit")
## Not run: 
data(ozone, package = "ibr")
res.ibr <- ibr(Ozone~., data = ozone, df = 1.1)
summary(res.ibr)
predict(res.ibr)
## End(Not run)
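Several criteria can also be supplied at once and the selected numbers of iterations compared through the returned components; a hedged sketch (it assumes that lower-case criterion names such as "aicc" and "bic" are accepted, in line with the default criterion="gcv"):

res.multi <- ibr(Ozone~., data = ozone, df = 1.1, criterion = c("gcv", "aicc", "bic"))
res.multi$iter     # number of iterations retained for the fit
res.multi$alliter  # optimal number of iterations under each criterion
res.multi$finaldf  # effective degrees of freedom of the final smoother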
Performs iterative bias reduction using kernel, thin plate splines, Duchon splines or low rank splines. Missing values are not allowed. This function is not intended to be used directly.
ibr.fit(x, y, criterion="gcv", df=1.5, Kmin=1, Kmax=1e+06, smoother="k", kernel="g", rank=NULL, control.par=list(), cv.options=list())
x |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
criterion |
A vector of string. If the number of iterations
( |
df |
A numeric vector of either length 1 or length equal to the
number of columns of |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
smoother |
Character string which allows to choose between thin plate
splines |
kernel |
Character string which allows to choose between gaussian kernel
( |
rank |
Numeric value that control the rank of low rank splines
(denoted as |
control.par |
A named list that control optional parameters. The
components are
|
cv.options |
A named list which controls the way to do cross
validation with component |
Returns a list including:
beta |
Vector of coefficients. |
residuals |
Vector of residuals. |
fitted |
Vector of fitted values. |
iter |
The number of iterations used. |
initialdf |
The initial effective degree of freedom of the pilot (or base) smoother. |
finaldf |
The effective degree of freedom of the iterated bias reduction
smoother at the |
bandwidth |
Vector of bandwidths for each explanatory variable. |
call |
The matched call |
parcall |
A list containing several components:
|
criteria |
Value
of the chosen criterion at the given iteration, |
alliter |
Numeric vector giving all the optimal number of iterations selected by the chosen criteria. |
allcriteria |
either a list containing all the criteria evaluated on the
grid |
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
ibr, predict.ibr, summary.ibr, gam
The function iterchoiceA searches the interval from mini to maxi for a minimum of the function which calculates the chosen criterion (critAgcv, critAaic, critAbic, critAaicc or critAgmdl) with respect to its first argument (a given iteration k) using optimize. This function is not intended to be used directly.
iterchoiceA(n, mini, maxi, eigenvaluesA, tPADmdemiY, DdemiPA, ddlmini, ddlmaxi, y, criterion, fraction)
n |
The number of observations. |
mini |
The lower end point of the interval to be searched. |
maxi |
The upper end point of the interval to be searched. |
eigenvaluesA |
Vector of the eigenvalues of the symmetric matrix A. |
tPADmdemiY |
The transpose of the matrix of eigen vectors of the symmetric matrix A times the inverse of the square root of the diagonal matrix D. |
DdemiPA |
The square root of the diagonal matrix D times the eigen vectors of the symmetric matrix A. |
ddlmini |
The number of eigenvalues (numerically) equal to 1. |
ddlmaxi |
The maximum df. No criterion is calculated and
|
y |
The vector of observations of dependant variable. |
criterion |
The criteria available are GCV (default, |
fraction |
The subdivision of the interval [ |
See the reference for a detailed explanation of A and D. The interval [mini, maxi] is split into subintervals using fraction. In each subinterval the function fcriterion is minimized using optimize (with respect to its first argument) and the minimum (and its argument) of these optimizations is returned.
A list with components iter and objective which give the (rounded) optimum number of iterations (between Kmin and Kmax) and the value of the function at that real point (not rounded).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function iterchoiceAcv searches the interval from mini to maxi for a minimum of the function criterion with respect to its first argument using optimize. This function is not intended to be used directly.
iterchoiceAcv(X, y, bx, df, kernelx, ddlmini, ntest, ntrain, Kfold, type, npermut, seed, Kmin, Kmax, criterion, fraction)
X |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
bx |
The vector of different bandwidths, length |
df |
A numeric vector of either length 1 or length equal to the
number of columns of |
kernelx |
Character string which allows to choose between gaussian kernel
( |
ddlmini |
The number of eigenvalues (numerically) equal to 1. |
ntest |
The number of observations in test set. |
ntrain |
The number of observations in training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draw (with replacement), used for
|
seed |
Controls the seed of random generator
(via |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
criterion |
The criteria available are map ( |
fraction |
The subdivision of the interval [ |
Returns the optimum number of iterations (between Kmin and Kmax).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Evaluates at each iteration proposed in the grid the cross-validated root mean squared error (RMSE) and mean of the relative absolute error (MAP). The minimum of these criteria gives an estimate of the optimal number of iterations. This function is not intended to be used directly.
iterchoiceAcve(X, y, bx, df, kernelx, ddlmini, ntest, ntrain, Kfold, type, npermut, seed, Kmin, Kmax)
X |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
bx |
The vector of different bandwidths, length |
df |
A numeric vector of either length 1 or length equal to the
number of columns of |
kernelx |
Character string which allows to choose between gaussian kernel
( |
ddlmini |
The number of eigenvalues (numerically) equal to 1. |
ntest |
The number of observations in test set. |
ntrain |
The number of observations in training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draw (with replacement), used for
|
seed |
Controls the seed of random generator
(via |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Returns the values of RMSE and MAP for each value of the grid K. Inf values are returned if the iteration leads to a smoother with a df bigger than ddlmaxi.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Evaluates at each iteration proposed in the grid the value of different criteria: GCV, AIC, corrected AIC, BIC and gMDL (along with the ddl and sigma squared). The minimum of these criteria gives an estimate of the optimal number of iterations. This function is not intended to be used directly.
iterchoiceAe(Y, K, eigenvaluesA, tPADmdemiY, DdemiPA, ddlmini, ddlmaxi)
Y |
The response variable. |
K |
A numeric vector which gives the search grid for iterations. |
eigenvaluesA |
Vector of the eigenvalues of the symmetric matrix A. |
tPADmdemiY |
The transpose of the matrix of eigen vectors of the symmetric matrix A times the inverse of the square root of the diagonal matrix D. |
DdemiPA |
The square root of the diagonal matrix D times the eigen vectors of the symmetric matrix A. |
ddlmini |
The number of eigenvalues (numerically) which are equal to 1. |
ddlmaxi |
The maximum df. No criteria are calculated beyond the number of iterations that leads to df bigger than this bound. |
See the reference for detailed explanation of A and D
Returns the values of GCV, AIC, corrected AIC, BIC, gMDL, df and sigma squared for each value of the grid K. Inf values are returned if the iteration leads to a smoother with a df bigger than ddlmaxi.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function iterchoiceS1 searches the interval from mini to maxi for a minimum of the function which calculates the chosen criterion (critS1gcv, critS1aic, critS1bic, critS1aicc or critS1gmdl) with respect to its first argument (a given iteration k) using optimize. This function is not intended to be used directly.
iterchoiceS1(n, mini, maxi, tUy, eigenvaluesS1, ddlmini, ddlmaxi, y, criterion, fraction)
n |
The number of observations. |
mini |
The lower end point of the interval to be searched. |
maxi |
The upper end point of the interval to be searched. |
eigenvaluesS1 |
Vector of the eigenvalues of the symmetric smoothing matrix S. |
tUy |
The transpose of the matrix of eigen vectors of the symmetric smoothing matrix S times the vector of observation y. |
ddlmini |
The number of eigen values of S equal to 1. |
ddlmaxi |
The maximum df. No criterion is calculated and
|
y |
The vector of observations of dependant variable. |
criterion |
The criteria available are GCV (default, |
fraction |
The subdivision of the interval [ |
The interval [mini, maxi] is split into subintervals using fraction. In each subinterval the function fcriterion is minimized using optimize (with respect to its first argument) and the minimum (and its argument) of these optimizations is returned.
A list with components iter and objective which give the (rounded) optimum number of iterations (between Kmin and Kmax) and the value of the function at that real point (not rounded).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function iterchoiceS1cv searches the interval from mini to maxi for a minimum of the function criterion with respect to its first argument using optimize. This function is not intended to be used directly.
iterchoiceS1cv(X, y, lambda, df, ddlmini, ntest, ntrain, Kfold, type, npermut, seed, Kmin, Kmax, criterion, m, s, fraction)
X |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
lambda |
A numeric positive coefficient that governs the amount of penalty (coefficient lambda). |
df |
A numeric vector of length 1 which is multiplied by the minimum df of thin
plate splines ; This argument is useless if
|
ddlmini |
The number of eigenvalues equal to 1. |
ntest |
The number of observations in test set. |
ntrain |
The number of observations in training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draw (with replacement), used for
|
seed |
Controls the seed of random generator
(via |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
criterion |
The criteria available are map ( |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must verify 2m+2s/d>1, where d is the number of explanatory variables. |
s |
The power of weighting function. For thin plate splines s is equal to 0. This real must be strictly smaller than d/2 (where d is the number of explanatory variables) and must verify 2m+2s/d>1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (See Duchon, 1977). |
fraction |
The subdivision of the interval [ |
Returns the optimum number of iterations (between Kmin and Kmax).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
Evaluates at each iteration proposed in the grid the cross-validated root mean squared error (RMSE) and mean of the relative absolute error (MAP). The minimum of these criteria gives an estimate of the optimal number of iterations. This function is not intended to be used directly.
iterchoiceS1cve(X, y, lambda, df, ddlmini, ntest, ntrain, Kfold, type, npermut, seed, Kmin, Kmax, m, s)
X |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
lambda |
A numeric positive coefficient that governs the amount of penalty (coefficient lambda). |
df |
A numeric vector of length 1 which is multiplied by the minimum df of thin
plate splines ; This argument is useless if
|
ddlmini |
The number of eigenvalues equal to 1. |
ntest |
The number of observations in test set. |
ntrain |
The number of observations in training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draw (with replacement), used for
|
seed |
Controls the seed of random generator
(via |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must verify 2m+2s/d>1, where d is the number of explanatory variables. |
s |
The power of weighting function. For thin plate splines s is equal to 0. This real must be strictly smaller than d/2 (where d is the number of explanatory variables) and must verify 2m+2s/d>1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (See Duchon, 1977). |
Returns the values of RMSE and MAP for each value of the grid K. Inf values are returned if the iteration leads to a smoother with a df bigger than ddlmaxi.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
Evaluates at each iteration proposed in the grid the value of different criteria: GCV, AIC, corrected AIC, BIC and gMDL (along with the ddl and sigma squared). The minimum of these criteria gives an estimate of the optimal number of iterations. This function is not intended to be used directly.
iterchoiceS1e(y, K, tUy, eigenvaluesS1, ddlmini, ddlmaxi)
y |
The response variable |
K |
A numeric vector which gives the search grid for iterations. |
eigenvaluesS1 |
Vector of the eigenvalues of the symmetric smoothing matrix S. |
tUy |
The transpose of the matrix of eigen vectors of the symmetric smoothing matrix S times the vector of observation y. |
ddlmini |
The number of eigen values of S equal to 1. |
ddlmaxi |
The maximum df. No criteria are calculated beyond the number of iterations that leads to df bigger than this bound. |
Returns the values of GCV, AIC, corrected AIC, BIC, gMDL, df and sigma squared for each value of the grid K. Inf values are returned if the iteration leads to a smoother with a df bigger than ddlmaxi.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
The function iterchoiceS1lrcv searches the interval from mini to maxi for a minimum of the function criterion with respect to its first argument using optimize. This function is not intended to be used directly.
iterchoiceS1lrcv(X, y, lambda, rank, bs, listvarx, df, ddlmini, ntest, ntrain, Kfold, type, npermut, seed, Kmin, Kmax, criterion, m, s, fraction)
X |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
lambda |
A numeric positive coefficient that governs the amount of penalty (coefficient lambda). |
df |
A numeric vector of length 1 which is multiplied by the minimum df of thin
plate splines ; This argument is useless if
|
rank |
The rank of lowrank splines. |
bs |
The type rank of lowrank splines: |
listvarx |
The vector of the names of explanatory variables |
ddlmini |
The number of eigenvalues equal to 1. |
ntest |
The number of observations in test set. |
ntrain |
The number of observations in training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draw (with replacement), used for
|
seed |
Controls the seed of random generator
(via |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
criterion |
The criteria available are map ( |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must verify 2m+2s/d>1, where d is the number of explanatory variables. |
s |
The power of weighting function. For thin plate splines s is equal to 0. This real must be strictly smaller than d/2 (where d is the number of explanatory variables) and must verify 2m+2s/d>1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (See Duchon, 1977). |
fraction |
The subdivision of the interval [ |
Returns the optimum number of iterations (between Kmin and Kmax).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Shemp and K. Zeller (eds) Construction theory of functions of several variables, 85-100, Springer, Berlin.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
Evaluates at each iteration proposed in the grid the cross-validated root mean squared error (RMSE) and mean of the relative absolute error (MAP). The minimum of these criteria gives an estimate of the optimal number of iterations. This function is not intended to be used directly.
iterchoiceS1lrcve(X, y, lambda, rank, bs, listvarx, df, ddlmini, ntest, ntrain, Kfold, type, npermut, seed, Kmin, Kmax, m, s)
X |
A numeric matrix of explanatory variables, with n rows and p columns. |
y |
A numeric vector of variable to be explained of length n. |
lambda |
A numeric positive coefficient that governs the amount of penalty (coefficient lambda). |
rank |
The rank of lowrank splines. |
bs |
The type of lowrank splines: |
listvarx |
The vector of the names of explanatory variables |
df |
A numeric vector of length 1 which is multiplied by the minimum df of thin
plate splines; this argument is not used if
|
ddlmini |
The number of eigenvalues equal to 1. |
ntest |
The number of observations in the test set. |
ntrain |
The number of observations in the training set. |
Kfold |
Either the number of folds or a boolean or |
type |
A character string in
|
npermut |
The number of random draws (with replacement), used for
|
seed |
Controls the seed of the random generator
(via |
Kmin |
The minimum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
Kmax |
The maximum number of bias correction iterations of the search grid considered by the model selection procedure for selecting the optimal number of iterations. |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must satisfy (2m+2s)/d > 1, where d is the number of explanatory variables. |
s |
The power of the weighting function. For thin plate splines s equals 0. This real number must be strictly smaller than d/2 (where d is the number of explanatory variables) and must satisfy (2m+2s)/d > 1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (see Duchon, 1977). |
Returns the values of RMSE and MAP for each value of the grid K. Inf is returned if an iteration leads to a smoother with a df bigger than ddlmaxi.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Schempp and K. Zeller (eds), Constructive Theory of Functions of Several Variables, 85-100, Springer, Berlin.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
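Although these iteration-selection helpers are internal, the criterion they evaluate is easy to illustrate. The sketch below is not the package code: all names, the toy data and the bandwidth are illustrative, and it assumes the usual iterated-bias-reduction form in which the k-step fit applies S (I + (I-S) + ... + (I-S)^(k-1)) to the responses, with a Gaussian kernel pilot smoother S. It then computes a holdout RMSE over a grid of iteration numbers.

## Conceptual sketch (not the package internals): choose the number of
## bias-correction iterations k on a grid by holdout RMSE.
set.seed(1)
n <- 100
x <- sort(runif(n))
y <- sin(5 * pi * x) + rnorm(n, sd = 0.3)
itest  <- sample(n, 30)                      # holdout points
itrain <- setdiff(seq_len(n), itest)
m  <- length(itrain)

bx  <- 0.15                                  # illustrative bandwidth
Ktr <- dnorm(outer(x[itrain], x[itrain], "-") / bx)
S   <- Ktr / rowSums(Ktr)                    # row-normalised pilot smoother
Kte <- dnorm(outer(x[itest],  x[itrain], "-") / bx)
Sst <- Kte / rowSums(Kte)                    # kernel weights for holdout points

Kgrid <- 1:50                                # grid of iteration numbers
A     <- diag(m) - S
Apow  <- diag(m)                             # (I - S)^0
B     <- diag(m)                             # running sum of (I - S)^j
rmse  <- numeric(length(Kgrid))
for (k in seq_along(Kgrid)) {
  pred    <- Sst %*% (B %*% y[itrain])       # k-step IBR prediction on holdout
  rmse[k] <- sqrt(mean((y[itest] - pred)^2))
  Apow <- Apow %*% A                         # (I - S)^k
  B    <- B + Apow                           # sum_{j=0}^{k} (I - S)^j
}
Kgrid[which.min(rmse)]                       # holdout estimate of the optimal k

In the package itself this search is driven through the K, Kmin and Kmax arguments rather than by hand.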
Evaluates a kernel function at x: Gaussian, Epanechnikov, Uniform or Quartic. These functions are not intended to be used directly.
gaussien(X) epane(X) uniform(X) quartic(X)
X |
The value at which the function is evaluated; should be numeric and can be a scalar, a vector or a matrix |
Returns a scalar, a vector or a matrix whose coordinates are the values of the kernel at the given coordinates.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
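For orientation, the textbook forms of these four kernels are sketched below. The standalone definitions are illustrative stand-ins (the internal ibr versions may use a different scaling), but they show the scalar/vector/matrix behaviour described above.

## Textbook kernel forms (a sketch; not the package's internal code)
gaussian_k <- function(x) dnorm(x)
epane_k    <- function(x) 0.75 * (1 - x^2) * (abs(x) <= 1)
uniform_k  <- function(x) 0.5 * (abs(x) <= 1)
quartic_k  <- function(x) (15/16) * (1 - x^2)^2 * (abs(x) <= 1)

## They accept scalars, vectors or matrices and return the same shape, e.g.
quartic_k(seq(-1.5, 1.5, by = 0.5))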
The function evaluates the matrix of design weights to predict the response at arbitrary locations x. This function is not intended to be used directly.
kernelSx(kernelx="g",X,Xetoile,bx)
kernelx |
Character string which allows one to choose between the Gaussian kernel
( |
X |
Matrix of explanatory variables, size n, p. |
Xetoile |
Matrix of new design points x* at which to predict the response variable, size n*, p. |
bx |
The vector of different bandwidths, length |
Returns the matrix denoted in the paper, of size n*, n.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
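Conceptually, each row of this matrix holds the product-kernel weights of one new point with respect to the n design points. The sketch below illustrates that construction only; the function name and the row normalisation are assumptions, not the package implementation (whose arguments it merely mimics).

## Sketch: product-kernel weights of new points Xstar w.r.t. design X
product_kernel_weights <- function(X, Xstar, bx, kern = dnorm) {
  W <- matrix(1, nrow(Xstar), nrow(X))
  for (j in seq_len(ncol(X))) {
    U <- outer(Xstar[, j], X[, j], "-") / bx[j]   # scaled differences
    W <- W * kern(U)                              # product over covariates
  }
  W / rowSums(W)                                  # each row sums to one
}

X      <- as.matrix(expand.grid(x1 = seq(0, 1, 0.1), x2 = seq(0, 1, 0.1)))
Xstar  <- matrix(c(0.25, 0.75, 0.5, 0.5), 2, 2, byrow = TRUE)
S_star <- product_kernel_weights(X, Xstar, bx = c(0.2, 0.2))
dim(S_star)        # n* rows, n columns
rowSums(S_star)    # each row sums to 1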
Performs a search for the smoothness coefficient lambda. The coefficient is chosen such that the trace of the smoothing matrix (effective degree of freedom) is equal to a given value. This function is not intended to be used directly.
lambdachoice(X,ddlobjectif,m=2,s=0,itermax,smoother="tps")
X |
A matrix with |
ddlobjectif |
A numeric vector of length 1 which indicates the desired effective degree of
freedom (trace) of the smoothing matrix for
thin plate splines of order |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must satisfy (2m+2s)/d > 1, where d is the number of explanatory variables. |
s |
The power of the weighting function. For thin plate splines s equals 0. This real number must be strictly smaller than d/2 (where d is the number of explanatory variables) and must satisfy (2m+2s)/d > 1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (see Duchon, 1977). |
itermax |
A scalar which controls the number of iterations for that search |
smoother |
Character string which allows one to choose between thin plate splines |
Returns the coefficient lambda that controls the smoothness for the desired effective degree of freedom
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Schempp and K. Zeller (eds), Constructive Theory of Functions of Several Variables, 85-100, Springer, Berlin.
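The underlying idea, matching a target effective degree of freedom, can be illustrated with any penalised smoother: find lambda such that the trace of the hat matrix equals the target, for instance by root finding. The sketch below uses a simple ridge-type smoother and uniroot purely for illustration; it is not the ibr search.

## Sketch: choose lambda so that the effective df (trace of the hat
## matrix) of a ridge-type smoother matches a target value.
set.seed(2)
n <- 80
X <- cbind(1, matrix(rnorm(n * 4), n, 4))      # toy design with intercept

edf <- function(lambda) {
  H <- X %*% solve(crossprod(X) + lambda * diag(ncol(X)), t(X))
  sum(diag(H))                                  # trace = effective df
}

target <- 3
## edf(lambda) decreases in lambda, so bracket the root and solve
lam <- uniroot(function(l) edf(l) - target, lower = 1e-8, upper = 1e6)$root
c(lambda = lam, edf = edf(lam))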
Performs a search for the smoothness coefficient lambda of lowrank splines. The coefficient is chosen such that the trace of the smoothing matrix (effective degree of freedom) is equal to a given value. This function is not intended to be used directly.
lambdachoicelr(x,ddlobjectif,m=2,s=0,rank,itermax,bs,listvarx)
x |
A matrix with |
ddlobjectif |
A numeric vector of length 1 which indicates the desired effective degree of
freedom (trace) of the smoothing matrix for
thin plate splines of order |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must satisfy (2m+2s)/d > 1, where d is the number of explanatory variables. |
s |
The power of the weighting function. For thin plate splines s equals 0. This real number must be strictly smaller than d/2 (where d is the number of explanatory variables) and must satisfy (2m+2s)/d > 1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (see Duchon, 1977). |
itermax |
A scalar which controls the number of iterations for that search |
rank |
The rank of lowrank splines. |
bs |
The type of lowrank splines: |
listvarx |
The vector of the names of explanatory variables |
Returns the coefficient lambda that controls the smoothness for the desired effective degree of freedom
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Schempp and K. Zeller (eds), Constructive Theory of Functions of Several Variables, 85-100, Springer, Berlin.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
The function evaluates all the features needed for a lowrank spline smoothing. This function is not intended to be used directly.
lrsmoother(x,bs,listvarx,lambda,m,s,rank)
x |
Matrix of explanatory variables, size n,p. |
bs |
The type of lowrank splines: |
listvarx |
The vector of the names of explanatory variables |
lambda |
The smoothness coefficient lambda for thin plate splines of
order |
m |
The order of derivatives for the penalty (for thin plate splines it is the order). This integer m must satisfy (2m+2s)/d > 1, where d is the number of explanatory variables. |
s |
The power of the weighting function. For thin plate splines s equals 0. This real number must be strictly smaller than d/2 (where d is the number of explanatory variables) and must satisfy (2m+2s)/d > 1. To get pseudo-cubic splines, choose m=2 and s=(d-1)/2 (see Duchon, 1977). |
rank |
The rank of lowrank splines. |
See the reference for a detailed explanation of the matrix R^-1 U, and see smoothCon (package mgcv) for the definition of smoothobject.
Returns a list containing the eigenvectors and eigenvalues of the smoothing matrix (components vectors and values), the matrix denoted Rm1U, and the smooth object smoothobject.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober
Duchon, J. (1977) Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In W. Schempp and K. Zeller (eds), Constructive Theory of Functions of Several Variables, 85-100, Springer, Berlin.
Wood, S.N. (2003) Thin plate regression splines. J. R. Statist. Soc. B, 65, 95-114.
Predicted values from local polynomials of degree less than 2.
Missing values are not allowed.
npregress(x, y, criterion="rmse", bandwidth=NULL,kernel="g", control.par=list(), cv.options=list())
x |
A numeric vector of explanatory variable of length n. |
y |
A numeric vector of variable to be explained of length n. |
criterion |
Character string. If the bandwidth
( |
bandwidth |
The kernel bandwidth smoothing parameter (a numeric vector of length 1). |
kernel |
Character string which allows one to choose between the Gaussian kernel
( |
control.par |
A named list that controls optional parameters. The two components are |
cv.options |
A named list which controls how cross-validation is done, with component |
Returns an object of class npregress
which is a list including:
bandwidth |
The kernel bandwidth smoothing parameter. |
residuals |
Vector of residuals. |
fitted |
Vector of fitted values. |
df |
The effective degree of freedom of the smoother. |
call |
A list containing four components: |
criteria |
Either a named list containing the bandwidth search grid and all the criteria ( |
See locpoly for a fast binned implementation of local polynomial regression over an equally-spaced grid. See ibr for univariate and multivariate smoothing.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
predict.npregress
,
summary.npregress
,
locpoly
, ibr
f <- function(x){sin(5*pi*x)}
n <- 100
x <- runif(n)
z <- f(x)
sigma2 <- 0.05*var(z)
erreur <- rnorm(n,0,sqrt(sigma2))
y <- z+erreur
res <- npregress(x,y,bandwidth=0.02)
summary(res)
ord <- order(x)
plot(x,y)
lines(x[ord],predict(res)[ord])
Los Angeles ozone pollution data, 1976.
We deleted from the original data the first 3 columns, which were the Month, Day of the month and Day of the week. Each observation is one day, so there are 366 rows. The ozone data is a matrix with 9 columns.
This data set is a matrix containing the following columns:
[,1] | Ozone | numeric | Daily maximum one-hour-average ozone reading (parts per million) at Upland, CA. |
[,2] | Pressure.Vand | numeric | 500 millibar pressure height (m) measured at Vandenberg AFB. |
[,3] | Wind | numeric | Wind speed (mph) at Los Angeles International Airport (LAX). |
[,4] | Humidity | numeric | Humidity in percentage at LAX. |
[,5] | Temp.Sand | numeric | Temperature (degrees F) measured at Sandburg, CA. |
[,6] | Inv.Base.height | numeric | Inversion base height (feet) at LAX. |
[,7] | Pressure.Grad | numeric | Pressure gradient (mm Hg) from LAX to Daggett, CA. |
[,8] | Inv.Base.Temp | numeric | Inversion base temperature (degrees F) at LAX. |
[,9] | Visibility | numeric | Visibility (miles) measured at LAX. |
Leo Breiman, Department of Statistics, UC Berkeley. Data used in Breiman, L. and Friedman, J. H. (1985). Estimating optimal transformations for multiple regression and correlation, Journal of American Statistical Association, 80, 580–598.
One plot is currently available: an image plot of the criterion values obtained by the forward selection process.
## S3 method for class 'forwardibr' plot(x,global=FALSE,... )
x |
Object of class |
global |
Boolean: if |
... |
further arguments passed to |
The function plot.forwardibr
gives an image plot of the values of the criterion obtained by the forward
selection process. The image is read from bottom to top. The bottom row
contains all the univariate models, and the selected variable is the one
with the lowest criterion value. That variable is kept for the second row,
where the second variable to include is the one giving the lowest criterion
value in that row, and so on. All the variables included in the final
model (selected by forward search) are numbered on the image (in order of
inclusion).
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
## Not run: data(ozone, package = "ibr") ibrsel <- forward(ibr(ozone[,-1],ozone[,1],df=1.2)) plot(ibrsel) plot(apply(ibrsel,1,min,na.rm=TRUE),type="l") ## End(Not run)
One plot is currently available: a plot of residuals against fitted values.
## S3 method for class 'ibr' plot(x,... )
x |
Object of class |
... |
Further arguments passed to or from other methods. |
The function plot.ibr plots the residuals against the fitted values of the iterative bias reduction smoother given in x.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
## Not run: data(ozone, package = "ibr") res.ibr <- ibr(ozone[,-1],ozone[,1],df=1.2) plot(res.ibr) ## End(Not run)
Evaluates the product of kernel functions at (X-valx)/bx: Gaussian, Epanechnikov, Uniform or Quartic. This function is not intended to be used directly.
poids(kernelx,X,bx,valx,n,p)
kernelx |
Character string which allows one to choose between the Gaussian kernel
( |
X |
Matrix of explanatory variables, size n, p. |
bx |
The vector of different bandwidths, length |
valx |
The vector of length |
n |
Number of rows of X. |
p |
Number of columns of X. |
Returns a vector whose coordinates are the values of the product kernel at the given coordinates.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Predicted values from an iterative bias reduction object.
Missing values are not allowed.
## S3 method for class 'ibr' predict(object, newdata, interval= c("none", "confidence", "prediction"), ...)
object |
Object of class |
newdata |
An optional matrix in which to look for variables with which to predict. If omitted, the fitted values are used. |
interval |
Type of interval calculation. Only |
... |
Further arguments passed to or from other methods. |
Produces a vector of predictions.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
## Not run: data(ozone, package = "ibr") res.ibr <- ibr(ozone[,-1],ozone[,1],df=1.2,K=1:500) summary(res.ibr) predict(res.ibr) ## End(Not run)
Predicted values from local polynomials of degree less than 2. See
locpoly
for a fast binned implementation of local polynomial regression
over an equally-spaced grid (Gaussian kernel only).
Missing values are not allowed.
## S3 method for class 'npregress' predict(object, newdata, interval= c("none", "confidence", "prediction"), deriv=FALSE, ...)
object |
Object of class |
newdata |
An optional vector of values to be predicted. If omitted, the fitted values are used. |
interval |
Type of interval calculation. Only |
deriv |
Boolean. If |
... |
Further arguments passed to or from other methods. |
Produces a vector of predictions. If deriv
is TRUE
the value is a named list with components: yhat
which contains predictions and (if relevant) deriv
the
first derivative of the local polynomial of degree 1.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
npregress
, summary.npregress
,
locpoly
f <- function(x){sin(5*pi*x)}
n <- 100
x <- runif(n)
z <- f(x)
sigma2 <- 0.05*var(z)
erreur <- rnorm(n,0,sqrt(sigma2))
y <- z+erreur
grid <- seq(min(x),max(x),length=500)
res <- npregress(x,y,bandwidth=0.02,control.par=list(degree=1))
plot(x,y)
lines(grid,predict(res,grid))
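Continuing the example above, and as documented for the deriv argument, requesting the derivative with degree-1 local polynomials returns a named list with components yhat and deriv:

pr <- predict(res, grid, deriv = TRUE)
str(pr)                           # named list with components yhat and deriv
plot(grid, pr$deriv, type = "l")  # estimated first derivative of the regression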
print
method for class “summary.ibr
”.
## S3 method for class 'summary.ibr' print(x,displaybw=FALSE, digits = max(3, getOption("digits") - 3), ...)
x |
Object of class |
displaybw |
Boolean that indicates whether bandwidths are printed or not. |
digits |
Rounds the values in its first argument to the specified number of significant digits. |
... |
Further arguments passed to or from other methods. |
The function print.summary.ibr
prints a list of summary
statistics of the fitted iterative bias reduction model given in x
.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
## Not run: data(ozone, package = "ibr") res.ibr <- ibr(ozone[,-1],ozone[,1],df=1.2) summary(res.ibr) predict(res.ibr) ## End(Not run)
print
method for class “summary.npregress
”.
## S3 method for class 'summary.npregress' print(x,digits = max(3, getOption("digits") - 3), ...)
x |
Object of class |
digits |
Rounds the values in its first argument to the specified number of significant digits. |
... |
Further arguments passed to or from other methods. |
The function print.summary.npregress prints a list of summary statistics of the fitted local polynomial smoother given in x.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
f <- function(x){sin(5*pi*x)}
n <- 100
x <- runif(n)
z <- f(x)
sigma2 <- 0.05*var(z)
erreur <- rnorm(n,0,sqrt(sigma2))
y <- z+erreur
res <- npregress(x,y,bandwidth=0.02)
summary(res)
summary
method for class “ibr
”.
## S3 method for class 'ibr' summary(object, criteria="call", ...)
object |
Object of class |
criteria |
Character string which gives the criterion evaluated for the model. The criteria available are GCV (default, |
... |
Further arguments passed to or from other methods. |
The function summary.ibr
computes and returns a list of summary
statistics of the fitted iterative bias reduction smoother given in object
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
## Not run: data(ozone, package = "ibr") res.ibr <- ibr(ozone[,-1],ozone[,1],df=1.2) summary(res.ibr) predict(res.ibr) ## End(Not run)
summary
method for class “npregress
”.
## S3 method for class 'npregress' summary(object, criteria="call", ...)
object |
Object of class |
criteria |
Character string which gives the criterion evaluated for the model. The criteria available are GCV (default, |
... |
Further arguments passed to or from other methods. |
The function summary.npregress
computes and returns a list of summary
statistics of the local polynomial smoother given in object
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
f <- function(x){sin(5*pi*x)}
n <- 100
x <- runif(n)
z <- f(x)
sigma2 <- 0.05*var(z)
erreur <- rnorm(n,0,sqrt(sigma2))
y <- z+erreur
res <- npregress(x,y,bandwidth=0.02)
summary(res)
Calculates the sum of the first (k+1) terms of a geometric series with initial term 1 and common ratio equal to valpr (less than or equal to 1).
sumvalpr(k,n,valpr,index1,index0)
k |
The number of terms minus 1. |
n |
The length of |
valpr |
Vector of common ratios in decreasing order. |
index1 |
The index of the last common ratio equal to 1. |
index0 |
The index of the first common ratio equal to 0. |
Returns the vector of the sums of the first (k+1) terms of the geometric series.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
Cornillon, P.-A.; Hengartner, N.; Jegou, N. and Matzner-Lober, E. (2012) Iterative bias reduction: a comparative study. Statistics and Computing, 23, 777-791.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2013) Recursive bias estimation for multivariate regression smoothers. ESAIM: Probability and Statistics, 18, 483-502.
Cornillon, P.-A.; Hengartner, N. and Matzner-Lober, E. (2017) Iterative Bias Reduction Multivariate Smoothing in R: The ibr Package. Journal of Statistical Software, 77, 1–26.
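The quantity involved is the partial sum of a geometric series: for a ratio r, the sum of the first (k+1) terms equals (1 - r^(k+1))/(1 - r) when r differs from 1, and k+1 when r equals 1. A hypothetical standalone version (for illustration only, not the package code) could read:

## Hypothetical re-implementation for illustration: partial sums of a
## geometric series for each ratio in 'valpr' (0 <= valpr <= 1).
geom_partial_sum <- function(k, valpr) {
  out <- (1 - valpr^(k + 1)) / (1 - valpr)   # closed form for ratio != 1
  out[valpr == 1] <- k + 1                   # ratio 1: k+1 identical terms
  out
}
geom_partial_sum(3, c(1, 0.5, 0))            # returns c(4, 1.875, 1)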
Evaluates the trace of the product kernel smoother (Gaussian, Epanechnikov, Uniform or Quartic). This function is not intended to be used directly.
tracekernel(X,bx,kernelx,n,p)
X |
Matrix of explanatory variables, size n, p. |
bx |
The vector of different bandwidths, length |
kernelx |
Character string which allows one to choose between the Gaussian kernel
( |
n |
Number of rows of X. |
p |
Number of columns of X. |
Returns the trace (effective degrees of freedom) of the product kernel smoother.
Pierre-Andre Cornillon, Nicolas Hengartner and Eric Matzner-Lober.
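The quantity computed here is the trace of the n x n product-kernel smoothing matrix, i.e. its effective degrees of freedom. A minimal sketch with a row-normalised product Gaussian kernel (illustration only, not the ibr internals):

## Sketch: effective df of a product Gaussian kernel smoother as the
## trace of its n x n smoothing matrix.
set.seed(3)
n  <- 60
X  <- matrix(runif(n * 2), n, 2)
bx <- c(0.2, 0.3)

W <- matrix(1, n, n)
for (j in 1:2) W <- W * dnorm(outer(X[, j], X[, j], "-") / bx[j])
S <- W / rowSums(W)              # smoothing (hat) matrix
sum(diag(S))                     # trace = effective degrees of freedom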