Title: Polynomial Spline Routines
Description: Routines for the polynomial spline fitting routines hazard regression, hazard estimation with flexible tails, logspline, lspec, polyclass, and polymars, by C. Kooperberg and co-authors.
Authors: Charles Kooperberg [aut, cre], Cleve Moler [ctb] (LINPACK routines in src), Jack Dongarra [ctb] (LINPACK routines in src)
Maintainer: Charles Kooperberg <[email protected]>
License: GPL (>= 2)
Version: 1.1.25
Built: 2024-12-07 06:27:39 UTC
Source: CRAN
Produces a beta-plot for a polyclass
object.
beta.polyclass(fit, which, xsp = 0.4, cex)
fit |
polyclass object, typically the result of polyclass. |
which |
which classes should be compared? Default is to compare all classes. |
xsp |
location of the vertical line to the left of the axis. Useful for making high-quality, device-dependent graphics. |
cex |
character size. Default is the current character size. Useful for making high-quality, device-dependent graphics. |
A beta plot. One line for each basis function. The left part of the plot indicates the basis function, the right half the relative location of the betas (coefficients) of that basis function, normalized with respect to parent basis functions, for all classes. The scaling is supposed to suggest a relative importance of the basis functions. This may suggest which basis functions are important for separating particular classes.
This is not a generic function, and the complete name, beta.polyclass, has to be specified.
Charles Kooperberg [email protected].
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polyclass, plot.polyclass, summary.polyclass, cpolyclass, ppolyclass, rpolyclass.
data(iris)
fit.iris <- polyclass(iris[,5], iris[,1:4])
beta.polyclass(fit.iris)
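The which argument described above can restrict the comparison to a subset of the classes; a minimal sketch continuing the example (the class indices are illustrative):
beta.polyclass(fit.iris, which = c(2, 3))   # compare only classes 2 and 3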
Autocorrelations or autocovariances (clspec), spectral densities and line spectrum (dlspec), spectral distributions (plspec), or a random time series (rlspec) from a model fitted with lspec.
clspec(lag, fit, cov = TRUE, mm)
dlspec(freq, fit)
plspec(freq, fit, mm)
rlspec(n, fit, mean = 0, cosmodel = FALSE, mm)
lag |
vector of integer-valued lags for which the autocovariances or autocorrelations are to be computed. |
fit |
lspec object, typically the result of lspec. |
cov |
compute autocovariances (TRUE) or autocorrelations (FALSE)? Default is TRUE. |
mm |
number of points used in integration and the fft. Default is the
smallest power of two larger than |
freq |
vector of frequencies. For |
n |
length of the random time series to be generated. |
mean |
mean level of the time series to be generated. |
cosmodel |
indicate that the data should be generated from a model with constant harmonic terms rather than a true Gaussian time series. |
Autocovariances or autocorrelations (clspec); values of the spectral distribution at the requested frequencies (plspec); a random time series of length n (rlspec); or a list with three components (dlspec):
d |
the spectral density evaluated at the vector of frequencies, |
modfreq |
modified frequencies of the form |
m |
mass of the line spectrum at the modified frequencies. |
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone, and Young K. Truong (1995). Logspline Estimation of a Possibly Mixed Spectral Distribution. Journal of Time Series Analysis, 16, 359-388.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
lspec, plot.lspec, summary.lspec.
data(co2)
co2.detrend <- lm(co2~c(1:length(co2)))$residuals
fit <- lspec(co2.detrend)
clspec(0:12, fit)
plspec((0:314)/100, fit)
dlspec((0:314)/100, fit)
rlspec(length(co2), fit)
Classify new cases (cpolyclass), compute class probabilities for new cases (ppolyclass), and generate random multinomials for new cases (rpolyclass) for a polyclass model.
cpolyclass(cov, fit)
ppolyclass(data, cov, fit)
rpolyclass(n, cov, fit)
cov |
covariates. Should be a matrix with |
fit |
polyclass object, typically the result of polyclass. |
data |
there are several possibilities. If data is a vector with as many elements as cov has rows, each element of data corresponds to a row of cov; if only one value is given, the probability of being in that class is computed for all sets of covariates. If data is omitted, all class probabilities are provided. |
n |
number of pseudo random numbers to be generated. |
Most likely classes (cpolyclass), class probabilities (ppolyclass), or random classes according to the estimated probabilities (rpolyclass).
Charles Kooperberg [email protected].
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polyclass, plot.polyclass, summary.polyclass, beta.polyclass.
data(iris)
fit.iris <- polyclass(iris[,5], iris[,1:4])
class.iris <- cpolyclass(iris[,1:4], fit.iris)
table(class.iris, iris[,5])
prob.setosa <- ppolyclass(1, iris[,1:4], fit.iris)
prob.correct <- ppolyclass(iris[,5], iris[,1:4], fit.iris)
rpolyclass(100, iris[64,1:4], fit.iris)
Produces a design matrix for a model of class polymars.
design.polymars(object, x)
object |
object of the class |
x |
the predictor values at which the design matrix will be computed. The
predictor values can be in a number of formats. It can take the form of a
vector of length equal to the number of predictors in the original data set
or it can be shortened to the length of only those predictors that occur in
the model, in the same order as they appear in the original data set.
Similarly, |
The design matrix corresponding to the fitted polymars
model.
Charles Kooperberg
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polymars, plot.polymars, predict.polymars, summary.polymars.
data(state)
state.pm <- polymars(state.region, state.x77, knots = 15, classify = TRUE, gcv = 1)
desmat <- design.polymars(state.pm, state.x77)
# compute traditional summary of the fit for the first class
summary(lm(((state.region=="Northeast")*1) ~ desmat - 1))
Density (dhare), cumulative probability (phare), hazard rate (hhare), quantiles (qhare), and random samples (rhare) from a hare object.
dhare(q, cov, fit)
hhare(q, cov, fit)
phare(q, cov, fit)
qhare(p, cov, fit)
rhare(n, cov, fit)
q |
vector of quantiles. Missing values ( |
p |
vector of probabilities. Missing values ( |
n |
sample size. If |
cov |
covariates. There are several possibilities. If a vector of length
|
fit |
hare object, typically the result of hare. |
Elements of q or p that are missing will cause the corresponding elements of the result to be missing.
Densities (dhare), hazard rates (hhare), probabilities (phare), quantiles (qhare), or a random sample (rhare) from a hare object.
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
hare, plot.hare, summary.hare.
fit <- hare(testhare[,1], testhare[,2], testhare[,3:8])
dhare(0:10, testhare[117,3:8], fit)
hhare(0:10, testhare[1:11,3:8], fit)
phare(10, testhare[1:25,3:8], fit)
qhare((1:19)/20, testhare[117,3:8], fit)
rhare(10, testhare[117,3:8], fit)
Density (dheft), cumulative probability (pheft), hazard rate (hheft), quantiles (qheft), and random samples (rheft) from a heft object.
dheft(q, fit)
hheft(q, fit)
pheft(q, fit)
qheft(p, fit)
rheft(n, fit)
q |
vector of quantiles. Missing values ( |
p |
vector of probabilities. Missing values ( |
n |
sample size. If |
fit |
heft object, typically the result of heft. |
Elements of q or p that are missing will cause the corresponding elements of the result to be missing.
Densities (dheft), hazard rates (hheft), probabilities (pheft), quantiles (qheft), or a random sample (rheft) from a heft object.
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
heft, plot.heft, summary.heft.
fit <- heft(testhare[,1], testhare[,2])
dheft(0:10, fit)
hheft(0:10, fit)
pheft(0:10, fit)
qheft((1:19)/20, fit)
rheft(10, fit)
Density (dlogspline), cumulative probability (plogspline), quantiles (qlogspline), and random samples (rlogspline) from a logspline density that was fitted using the 1997 knot addition and deletion algorithm (logspline). The 1992 algorithm is available using the oldlogspline function.
dlogspline(q, fit, log = FALSE)
plogspline(q, fit)
qlogspline(p, fit)
rlogspline(n, fit)
q |
vector of quantiles. Missing values (NAs) are allowed. |
p |
vector of probabilities. Missing values (NAs) are allowed. |
n |
sample size. If |
fit |
logspline object, typically the result of logspline. |
log |
should dlogspline return log-densities (TRUE) or densities (FALSE)? The default is FALSE. |
Elements of q or p that are missing will cause the corresponding elements of the result to be missing.
Densities (dlogspline), probabilities (plogspline), quantiles (qlogspline), or a random sample (rlogspline) from a logspline density that was fitted using knot addition and deletion.
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, plot.logspline, summary.logspline, oldlogspline.
x <- rnorm(100)
fit <- logspline(x)
qq <- qlogspline((1:99)/100, fit)
plot(qnorm((1:99)/100), qq) # qq plot of the fitted density
pp <- plogspline((-250:250)/100, fit)
plot((-250:250)/100, pp, type = "l")
lines((-250:250)/100, pnorm((-250:250)/100)) # assess the fit of the distribution
dd <- dlogspline((-250:250)/100, fit)
plot((-250:250)/100, dd, type = "l")
lines((-250:250)/100, dnorm((-250:250)/100)) # assess the fit of the density
rr <- rlogspline(100, fit) # random sample from fit
Probability density function (doldlogspline), distribution function (poldlogspline), quantiles (qoldlogspline), and random samples (roldlogspline) from a logspline density that was fitted using the 1992 knot deletion algorithm (oldlogspline). The 1997 algorithm using knot deletion and addition is available using the logspline function.
doldlogspline(q, fit)
poldlogspline(q, fit)
qoldlogspline(p, fit)
roldlogspline(n, fit)
q |
vector of quantiles. Missing values (NAs) are allowed. |
p |
vector of probabilities. Missing values (NAs) are allowed. |
n |
sample size. If |
fit |
oldlogspline object, typically the result of oldlogspline. |
Elements of q or p that are missing will cause the corresponding elements of the result to be missing.
Densities (doldlogspline), probabilities (poldlogspline), quantiles (qoldlogspline), or a random sample (roldlogspline) from an oldlogspline density that was fitted using knot deletion.
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, oldlogspline, plot.oldlogspline, summary.oldlogspline.
x <- rnorm(100)
fit <- oldlogspline(x)
qq <- qoldlogspline((1:99)/100, fit)
plot(qnorm((1:99)/100), qq) # qq plot of the fitted density
pp <- poldlogspline((-250:250)/100, fit)
plot((-250:250)/100, pp, type = "l")
lines((-250:250)/100, pnorm((-250:250)/100)) # assess the fit of the distribution
dd <- doldlogspline((-250:250)/100, fit)
plot((-250:250)/100, dd, type = "l")
lines((-250:250)/100, dnorm((-250:250)/100)) # assess the fit of the density
rr <- roldlogspline(100, fit) # random sample from fit
Fit a hazard regression model: linear splines are used to model the baseline hazard, covariates, and interactions. Fitted models can be, but do not need to be, proportional hazards models.
hare(data, delta, cov, penalty, maxdim, exclude, include, prophaz = FALSE, additive = FALSE, linear, fit, silent = TRUE)
data |
vector of observations. Observations may or may not be right censored. All observations should be nonnegative. |
delta |
binary vector with the same length as |
cov |
covariates: matrix with as many rows as the length of |
penalty |
the parameter to be used in the AIC criterion. The method chooses
the number of knots that minimizes |
maxdim |
maximum dimension (default is |
exclude |
combinations to be excluded - this should be a matrix with 2
columns - if for example |
include |
those combinations that can be included. Should have the same format
as |
prophaz |
should the model selection be restricted to proportional hazards models? |
additive |
should the model selection be restricted to additive models? |
linear |
vector indicating for which of the variables no knots should
be entered. For example, if |
fit |
hare object. If fit is specified, hare adds basis functions starting with those in fit. |
silent |
suppresses the printing of diagnostic output about basis functions added or deleted, Rao-statistics, Wald-statistics and log-likelihoods. |
An object of class hare, which is organized to serve as input for plot.hare, summary.hare, dhare (conditional density), hhare (conditional hazard rate), phare (conditional probabilities), qhare (conditional quantiles), and rhare (random numbers). The object is a list with the following members:
ncov |
number of covariates. |
ndim |
number of dimensions of the fitted model. |
fcts |
matrix of size second element: which knot (0 means: constant (time) or linear (covariate)); third element: second covariate involved ( fourth element: knot involved (if the third element is fifth element: beta; sixth element: standard error of beta. |
knots |
a matrix with |
penalty |
the parameter used in the AIC criterion. |
max |
maximum element of survival data. |
ranges |
column |
logl |
matrix with two columns. The |
sample |
sample size. |
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
heft, plot.hare, summary.hare, dhare, hhare, phare, qhare, rhare.
fit <- hare(testhare[,1], testhare[,2], testhare[,3:8])
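The prophaz and additive arguments described above restrict the model search; a minimal sketch using the same testhare data (the restricted fits are purely illustrative):
# restrict model selection to a proportional hazards model
fit.ph <- hare(testhare[,1], testhare[,2], testhare[,3:8], prophaz = TRUE)
# restrict model selection to an additive model
fit.add <- hare(testhare[,1], testhare[,2], testhare[,3:8], additive = TRUE)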
Hazard estimation using cubic splines to approximate the log-hazard function and special functions to allow non-polynomial shapes in both tails.
heft(data, delta, penalty, knots, leftlin, shift, leftlog, rightlog, maxknots, mindist, silent = TRUE)
data |
vector of observations. Observations may or may not be right censored. All observations should be nonnegative. |
delta |
binary vector with the same length as |
penalty |
the parameter to be used in the AIC criterion. The method chooses
the number of knots that minimizes |
knots |
ordered vector of values, which forces the method to start with these knots.
If |
leftlin |
if |
shift |
parameter for the log terms. Default is |
leftlog |
coefficient of |
rightlog |
coefficient of |
maxknots |
maximum number of knots allowed in the model (default is
|
mindist |
minimum distance in order statistics between knots. The default is 5. |
silent |
suppresses the printing of diagnostic output about knots added or deleted, Rao-statistics, Wald-statistics and log-likelihoods. |
An object of class heft, which is organized to serve as input for plot.heft, summary.heft, dheft (density), hheft (hazard rate), pheft (probabilities), qheft (quantiles), and rheft (random numbers). The object is a list with the following members:
knots |
vector of the locations of the knots in the |
logl |
the |
thetak |
coefficients of the knot part of the
spline. The k-th coefficient is the coefficient
of |
thetap |
coefficients of the polynomial part of the spline. The first element is the constant term and the second element is the linear term. |
thetal |
coefficients of the logarithmic terms. The first element equals
|
penalty |
the penalty that was used. |
shift |
parameter used in the definition of the log terms. |
sample |
the sample size. |
logse |
the standard errors of |
max |
the largest element of data. |
ad |
vector indicating whether a model of this dimension was not fit (2), fit during the addition stage (0) or during the deletion stage (1). |
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
hare, plot.heft, summary.heft, dheft, hheft, pheft, qheft, rheft.
fit1 <- heft(testhare[,1], testhare[,2])
# modify tail behavior
fit2 <- heft(testhare[,1], testhare[,2], leftlog = FALSE, rightlog = FALSE, leftlin = TRUE)
fit3 <- heft(testhare[,1], testhare[,2], penalty = 0)   # select largest model
Fits a logspline density, using splines to approximate the log-density, with the 1997 knot addition and deletion algorithm (logspline). The 1992 algorithm is available using the oldlogspline function.
logspline(x, lbound, ubound, maxknots = 0, knots, nknots = 0, penalty, silent = TRUE, mind = -1, error.action = 2)
x |
data vector. The data needs to be uncensored. |
lbound , ubound
|
lower/upper bound for the support of the density. For example, if there
is a priori knowledge that the density equals zero to the left of 0,
and has a discontinuity at 0,
the user could specify |
maxknots |
the maximum number of knots. The routine stops adding knots when this number of knots is reached. The method has an automatic rule for selecting maxknots if this parameter is not specified. |
knots |
ordered vector of values (that should cover the complete range of the
observations), which forces the method to start with these knots.
Overrules nknots.
If |
nknots |
forces the method to start with |
penalty |
the parameter to be used in the AIC criterion. The method chooses
the number of knots that minimizes
|
silent |
should diagnostic output be printed? |
mind |
minimum distance, in order statistics, between knots. |
error.action |
how should |
Object of the class logspline, which is intended as input for plot.logspline (summary plots), summary.logspline (fitting summary), dlogspline (densities), plogspline (probabilities), qlogspline (quantiles), and rlogspline (random numbers from the fitted distribution). The object has the following members:
call |
the command that was executed. |
nknots |
the number of knots in the model that was selected. |
coef.pol |
coefficients of the polynomial part of the spline. The first coefficient is the constant term and the second is the linear term. |
coef.kts |
coefficients of the knots part of the spline.
The |
knots |
vector of the locations of the knots in the |
maxknots |
the largest number of knots minus one considered during fitting
(i.e. with |
penalty |
the penalty that was used. |
bound |
first element: 0 - |
samples |
the sample size. |
logl |
matrix with 3 columns. Column one: number of knots; column two: model fitted during addition (1) or deletion (2); column 3: log-likelihood. |
range |
range of the input data. |
mind |
minimum distance in order statistics between knots required during fitting (the actual minimum distance may be much larger). |
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
plot.logspline, summary.logspline, dlogspline, plogspline, qlogspline, rlogspline, oldlogspline, oldlogspline.to.logspline.
y <- rnorm(100)
fit <- logspline(y)
plot(fit)
#
# as 4 == length(c(-2, -1, 0, 1, 2)) - 1, this forces these initial knots,
# and does no knot selection
fit <- logspline(y, knots = c(-2, -1, 0, 1, 2), maxknots = 4, penalty = 0)
#
# the following gives one of the rare examples where logspline
# crashes, and shows the use of error.action = 2.
#
set.seed(118)
zz <- rnorm(300)
zz[151:300] <- zz[151:300] + 5
zz <- round(zz)
fit <- logspline(zz)
#
# you could rerun this with
# fit <- logspline(zz, error.action = 0)
# or
# fit <- logspline(zz, error.action = 1)
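When the support of the density is known in advance, the lbound and ubound arguments described above can be supplied; a minimal sketch (the exponential sample is only for illustration):
y2 <- rexp(200)
fit.pos <- logspline(y2, lbound = 0)   # density known to be zero for negative values
plot(fit.pos)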
Fit an lspec model to a time series or a periodogram.
lspec(data, period, penalty, minmass, knots, maxknots, atoms, maxatoms, maxdim, odd = FALSE, updown = 3, silent = TRUE)
data |
time series (exactly one of |
period |
value of the periodogram for a time series at frequencies
|
penalty |
the parameter to be used in the AIC criterion. The method chooses
the number of basis
functions that minimizes |
minmass |
threshold value for atoms. No atoms having smaller mass than |
knots |
ordered vector of values, which forces the method to start with these knots.
If |
maxknots |
maximum number of knots allowed in the model. Does not need to be
specified, since the program has a default for |
atoms |
ordered vector of values, which forces the method to start with discrete
components at these frequencies. The values of atoms are rounded
to the nearest multiple of |
maxatoms |
maximum number of discrete components allowed in the model. Does not need to be
specified, since the program has a default for |
maxdim |
maximum number of basis
functions allowed in the model (default is
|
odd |
see |
updown |
the maximal number of times that |
silent |
should printing of information be suppressed? |
Object of class lspec. The output is organized to serve as input for plot.lspec (summary plots), summary.lspec (summarizes fitting), clspec (for autocorrelations and autocovariances), dlspec (for spectral density and line spectrum), plspec (for the spectral distribution), and rlspec (for random time series with the same spectrum).
call |
the command that was executed. |
thetap |
coefficients of the polynomial part of the spline. |
nknots |
the number of knots that were retained. |
knots |
vector of the locations of the knots in the logspline model. Only the knots that were retained are in this vector. |
thetak |
coefficients of the knot part of the
spline. The k-th coefficient is the coefficient
of |
natoms |
the number of atoms that were retained. |
atoms |
vector of the locations of the atoms in the model. Only the atoms that were retained are in this vector. |
mass |
The k-th coefficient is the mass at |
logl |
the log-likelihood of the model. |
penalty |
the penalty that was used. |
minmass |
the minimum mass for an atom that was allowed. |
sample |
the sample size that was used, either computed as |
updown |
the actual number of times that |
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone, and Young K. Truong (1995). Logspline Estimation of a Possibly Mixed Spectral Distribution. Journal of Time Series Analysis, 16, 359-388.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
plot.lspec, summary.lspec, clspec, dlspec, plspec, rlspec.
data(co2)
co2.detrend <- unstrip(lm(co2~c(1:length(co2)))$residuals)
fit <- lspec(co2.detrend)
Fits a logspline density, using splines to approximate the log-density, with the 1992 knot deletion algorithm (oldlogspline). The 1997 algorithm using knot deletion and addition is available using the logspline function.
oldlogspline(uncensored, right, left, interval, lbound, ubound, nknots, knots, penalty, delete = TRUE)
uncensored |
vector of uncensored observations from the distribution whose density is
to be estimated. If there are no uncensored observations, this argument can
be omitted. However, either |
right |
vector of right censored observations from the distribution whose density is to be estimated. If there are no right censored observations, this argument can be omitted. |
left |
vector of left censored observations from the distribution whose density is to be estimated. If there are no left censored observations, this argument can be omitted. |
interval |
two column matrix of lower and upper bounds of observations that are interval censored from the distribution whose density is to be estimated. If there are no interval censored observations, this argument can be omitted. |
lbound , ubound
|
lower/upper bound for the support of the density. For example, if there
is a priori knowledge that the density equals zero to the left of 0,
and has a discontinuity at 0,
the user could specify |
nknots |
forces the method to start with nknots knots ( |
knots |
ordered vector of values (that should cover the complete range of the
observations), which forces the method to start with these knots ( |
penalty |
the parameter to be used in the AIC criterion. The method chooses
the number of knots that minimizes |
delete |
should stepwise knot deletion be employed? |
Object of the class oldlogspline, which is intended as input for plot.oldlogspline, summary.oldlogspline, doldlogspline (densities), poldlogspline (probabilities), qoldlogspline (quantiles), and roldlogspline (random numbers from the fitted distribution). The function oldlogspline.to.logspline can translate an object of the class oldlogspline to an object of the class logspline. The object has the following members:
call |
the command that was executed. |
knots |
vector of the locations of the knots in the |
coef |
coefficients of the spline. The first coefficient is the constant term,
the second is the linear term and the k-th |
bound |
first element: 0 - |
logl |
the |
penalty |
the penalty that was used. |
sample |
the sample size that was used. |
delete |
was stepwise knot deletion employed? |
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, oldlogspline, plot.oldlogspline, summary.oldlogspline, doldlogspline, poldlogspline, qoldlogspline, roldlogspline, oldlogspline.to.logspline.
# A simple example
y <- rnorm(100)
fit <- oldlogspline(y)
plot(fit)
# An example involving censoring and a lower bound
y <- rlnorm(1000)
censoring <- rexp(1000) * 4
delta <- 1 * (y <= censoring)
y[delta == 0] <- censoring[delta == 0]
fit <- oldlogspline(y[delta == 1], y[delta == 0], lbound = 0)
Translates an oldlogspline object into a logspline object. This routine is mostly used in logspline, as it allows the routine to use oldlogspline for some situations where logspline crashes. The other use is when you have censored data, and thus have to use oldlogspline to fit, but wish to use the auxiliary routines from logspline.
oldlogspline.to.logspline(obj, data)
obj |
object of class |
data |
the original data. Used to compute the |
Object of the class logspline. The call component of the new object is not useful. The delete component of the old object is ignored.
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
x <- rnorm(100)
fit.old <- oldlogspline(x)
fit.translate <- oldlogspline.to.logspline(fit.old, x)
fit.new <- logspline(x)
plot(fit.new)
plot(fit.old, add = TRUE, col = 2)
#
# should look almost the same, the differences are the
# different fitting routines
#
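For censored data, which must be fitted with oldlogspline, the translated object makes the logspline auxiliary routines available; a minimal sketch (the censoring scheme is simulated for illustration, and passing the combined observation vector as the data argument is an assumption):
y <- rlnorm(500)
censoring <- rexp(500) * 4
delta <- 1 * (y <= censoring)
y[delta == 0] <- censoring[delta == 0]
fit.cens <- oldlogspline(y[delta == 1], y[delta == 0], lbound = 0)
fit.cens2 <- oldlogspline.to.logspline(fit.cens, y)   # data argument assumed to be the original observations
qlogspline((1:3)/4, fit.cens2)                        # quartiles from the translated fit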
This function is not intended for direct use. It is called by plot.polymars.
## S3 method for class 'polymars'
persp(x, predictor1, predictor2, response, n = 33, xlim, ylim, xx, contour.polymars, main, intercept, ...)
x , predictor1 , predictor2
|
this function is not intended to be called directly. |
response , n , xlim , ylim
|
this function is not intended to be called directly. |
xx , contour.polymars
|
this function is not intended to be called directly. |
main , intercept , ...
|
this function is not intended to be called directly. |
This function produces a 3-d contour or perspective plot. It is intended to be called by plot.polymars.
Martin O'Connor.
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
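Since persp.polymars is only called through plot.polymars, a 3-d view is requested by supplying two predictors there; a minimal sketch (the response index and the explicit xyz = TRUE are assumptions for this multi-response classification fit):
data(state)
state.pm <- polymars(state.region, state.x77, knots = 15, classify = TRUE, gcv = 1)
# perspective plot of the fitted surface in predictors 3 and 4 for the first response
plot(state.pm, predictor1 = 3, predictor2 = 4, response = 1, xyz = TRUE)
# the same surface drawn as a contour plot
plot(state.pm, predictor1 = 3, predictor2 = 4, response = 1, xyz = TRUE, contour.polymars = TRUE)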
Plots a density, distribution function, hazard function or survival function for a hare object.
## S3 method for class 'hare'
plot(x, cov, n = 100, which = 0, what = "d", time, add = FALSE, xlim, xlab, ylab, type, ...)
x |
hare object, typically the result of hare. |
cov |
a vector of length |
n |
the number of equally spaced points at which to plot the function. |
which |
for which coordinate should the plot be made. 0: time; positive value
i: covariate i. Note that if which is the positive value i, then the
element corresponding to this covariate must be given in |
what |
what should be plotted: |
time |
if which is not equal to 0, the value of time for which the plot should be made. |
add |
should the plot be added to an existing plot? |
xlim |
plotting limits; default is from the maximum of 0 and 10% before the 1st percentile to the minimum of 10% further than the 99th percentile and the largest observation. |
xlab , ylab
|
labels for the axes. Per default no labels are printed. |
type |
plotting type. The default is lines. |
... |
all other plotting options are passed on. |
This function produces a plot of a hare fit at n equally spaced points roughly covering the support of the density. (Use xlim = c(from, to) to change the range of these points.)
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
hare, summary.hare, dhare, hhare, phare, qhare, rhare.
fit <- hare(testhare[,1], testhare[,2], testhare[,3:8])
# hazard curve for covariates like case 1
plot(fit, testhare[1,3:8], what = "h")
# survival function as a function of covariate 2, for covariates as case 1 at t=3
plot(fit, testhare[1,3:8], which = 2, what = "s", time = 3)
Plots a density, distribution function, hazard function or survival function for a heft object.
## S3 method for class 'heft'
plot(x, n = 100, what = "d", add = FALSE, xlim, xlab, ylab, type, ...)
x |
heft object, typically the result of heft. |
n |
the number of equally spaced points at which to plot the function. |
what |
what should be plotted: |
add |
should the plot be added to an existing plot? |
xlim |
plotting limits; default is from the maximum of 0 and 10% before the 1st percentile to the minimum of 10% further than the 99th percentile and the largest observation. |
xlab , ylab
|
labels for the axes. The default is no labels. |
type |
plotting type. The default is lines. |
... |
all other plotting options are passed on. |
This function produces a plot of a heft fit at n equally spaced points roughly covering the support of the density. (Use xlim = c(from, to) to change the range of these points.)
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
heft, summary.heft, dheft, hheft, pheft, qheft, rheft.
fit1 <- heft(testhare[,1], testhare[,2])
plot(fit1, what = "h")
# modify tail behavior
fit2 <- heft(testhare[,1], testhare[,2], leftlog = FALSE, rightlog = FALSE, leftlin = TRUE)
plot(fit2, what = "h", add = TRUE, lty = 2)
fit3 <- heft(testhare[,1], testhare[,2], penalty = 0)   # select largest model
plot(fit3, what = "h", add = TRUE, lty = 3)
Plots a logspline density, distribution function, hazard function or survival function from a logspline density that was fitted using the 1997 knot addition and deletion algorithm (logspline). The 1992 algorithm is available using the oldlogspline function.
## S3 method for class 'logspline'
plot(x, n = 100, what = "d", add = FALSE, xlim, xlab = "", ylab = "", type = "l", ...)
x |
logspline object, typically the result of logspline. |
n |
the number of equally spaced points at which to plot the density. |
what |
what should be plotted:
|
add |
should the plot be added to an existing plot. |
xlim |
range of data on which to plot. Default is from the 1st to the 99th percentile of the density, extended by 10% on each end. |
xlab , ylab
|
labels plotted on the axes. |
type |
type of plot. |
... |
other plotting options, as desired |
This function produces a plot of a logspline fit at n equally spaced points roughly covering the support of the density. (Use xlim = c(from, to) to change the range of these points.)
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, summary.logspline, dlogspline, plogspline, qlogspline, rlogspline.
y <- rnorm(100)
fit <- logspline(y)
plot(fit)
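Other summaries of the same fit can be requested through the what argument; a minimal sketch (the single-letter codes for the distribution and survival functions are assumed to follow the same convention as plot.hare and plot.heft):
plot(fit, what = "p")   # distribution function (code assumed)
plot(fit, what = "s")   # survival function (code assumed)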
Plots a spectral density function, line spectrum, or spectral distribution from a model fitted with lspec.
## S3 method for class 'lspec'
plot(x, what = "b", n, add = FALSE, xlim, ylim, xlab = "", ylab = "", type, ...)
x |
lspec object, typically the result of lspec. |
what |
what should be plotted: b (spectral density and line spectrum superimposed), d (spectral density function), l (line spectrum) or p (spectral distribution function). |
n |
the number of equally spaced points at which to plot the fit; default is |
add |
indicate that the plot should be added to an existing plot. |
xlim |
X-axis plotting limits: default is |
ylim |
Y-axis plotting limits. |
xlab , ylab
|
axis labels. |
type |
plotting type; default is |
... |
all regular plotting options are passed on. |
If what = "p", the plotting range cannot extend beyond the interval [0, pi].
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone, and Young K. Truong (1995). Logspline Estimation of a Possibly Mixed Spectral Distribution. Journal of Time Series Analysis, 16, 359-388.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
lspec, summary.lspec, clspec, dlspec, plspec, rlspec.
data(co2)
co2.detrend <- lm(co2~c(1:length(co2)))$residuals
fit <- lspec(co2.detrend)
plot(fit)
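The spectral distribution function mentioned in the note above can be drawn by itself; a minimal sketch continuing the example:
plot(fit, what = "p")   # spectral distribution function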
Plots an oldlogspline density, distribution function, hazard function or survival function from a logspline density that was fitted using the 1992 knot deletion algorithm. The 1997 algorithm using knot deletion and addition is available using the logspline function.
## S3 method for class 'oldlogspline'
plot(x, n = 100, what = "d", xlim, xlab = "", ylab = "", type = "l", add = FALSE, ...)
x |
oldlogspline object, typically the result of oldlogspline. |
n |
the number of equally spaced points at which to plot the density. |
what |
what should be plotted:
|
xlim |
range of data on which to plot. Default is from the 1st to the 99th percentile of the density, extended by 10% on each end. |
xlab , ylab
|
labels plotted on the axes. |
type |
type of plot. |
add |
should the plot be added to an existing plot. |
... |
other plotting options, as desired |
This function produces a plot of an oldlogspline fit at n equally spaced points roughly covering the support of the density. (Use xlim = c(from, to) to change the range of these points.)
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, oldlogspline, summary.oldlogspline, doldlogspline, poldlogspline, qoldlogspline, roldlogspline.
y <- rnorm(100)
fit <- oldlogspline(y)
plot(fit)
Probability or classification plots for a polyclass
model.
## S3 method for class 'polyclass'
plot(x, cov, which, lims, what, data, n, xlab="", ylab="", zlab="", ...)
x |
polyclass object, typically the result of polyclass. |
cov |
a vector of length |
which |
for which covariates should the plot be made.
Number or a character string defining the name, if the
same names were used with the call to |
lims |
plotting limits. If omitted, the plot is made over the same range
of the covariate as in the original data. Otherwise a vector of
length two of the form |
what |
an integer between 1 and 8, defining the type of plot to be made.
|
data |
Class for which the plot is made. Should be provided if |
n |
the number of equally spaced points at which to plot the fit. The
default is 250 if |
xlab , ylab , zlab
|
axis plotting labels. |
... |
all other options are passed on. |
Charles Kooperberg [email protected].
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polyclass, summary.polyclass, beta.polyclass, cpolyclass, ppolyclass, rpolyclass.
data(iris)
fit.iris <- polyclass(iris[,5], iris[,1:4])
plot(fit.iris, iris[64,1:4], which = c(3,4), data = 2, what = 1)
plot(fit.iris, iris[64,1:4], which = c(3,4), what = 5)
plot(fit.iris, iris[64,1:4], which = 4, what = 7)
Produces two and three dimensional plots of the
fitted values from a polymars
object.
## S3 method for class 'polymars'
plot(x, predictor1, response, predictor2, xx, add = FALSE, n, xyz = FALSE, contour.polymars = FALSE, xlim, ylim, intercept, ...)
x |
polymars object, typically the result of polymars. |
predictor1 |
the index of a predictor that was used when the |
response |
if the model was fitted to multiple response data the response index should be specified. |
predictor2 |
the index of a predictor that was used when the |
xx |
should be a vector of length equal to the number of predictors in the
original data set. The values should be in the same order as in the original
dataset. By default the function uses the median values of the data that was
used to fit the model. Although the values for predictor1 and predictor2 are
not used, they should still be provided as part of |
add |
should the plot be added to a previously created plot? Works only for two dimensional plots. |
n |
number of plotting points (2 dimensional plot) or plotting points along each
axis (3 dimensional plot). The default is |
xyz |
is the plot being made a 3 dimensional plot?
If there is only one response it need not be set, if two numerical values
accompany the model in the call they will be understood as two predictors
for a 3-d plot. By default a 3-d plot uses the |
contour.polymars |
if the plot being made a 3 dimensional plot should it be made as a contour plot
( |
intercept |
Setting intercept equal to |
xlim , ylim
|
Plotting limits. The function tries to choose intelligent limits itself |
... |
other options are passed on. |
This function produces a 2-d plot of one predictor and response of a polymars object at n equally spaced points, or a 3-d plot of two predictors and response of a polymars object. The range of the plot is by default equal to the range of the particular predictor(s) in the original data, but this can be changed by xlim = c(from, to) and ylim = c(from, to).
Martin O'Connor.
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
design.polymars, polymars, predict.polymars, summary.polymars.
data(state)
state.pm <- polymars(state.region, state.x77, knots = 15, classify = TRUE, gcv = 1)
plot(state.pm, 3, 4)
Fit a polychotomous regression and multiple classification using linear splines and selected tensor products.
polyclass(data, cov, weight, penalty, maxdim, exclude, include, additive = FALSE, linear, delete = 2, fit, silent = TRUE, normweight = TRUE, tdata, tcov, tweight, cv, select, loss, seed)
data |
vector of classes:
|
cov |
covariates: matrix with as many rows as the length of |
weight |
optional vector of case-weights. Should have the same length as
|
penalty |
the parameter to be used in the AIC criterion if the
model selection is carried out by AIC. The program chooses
the number of knots that minimizes |
maxdim |
maximum dimension (default is
|
exclude |
combinations to be excluded - this should be a matrix with 2
columns - if for example |
include |
those combinations that can be included. Should have the same format
as |
additive |
should the model selection be restricted to additive models? |
linear |
vector indicating for which of the variables no knots should
be entered. For example, if |
delete |
should complete basis functions be deleted at once (2), should only individual dimensions be deleted (1) or should only the addition stage of the model selection be carried out (0)? |
fit |
polyclass object. If fit is specified, polyclass adds basis functions starting with those in fit. |
silent |
suppresses the printing of diagnostic output about basis functions added or deleted, Rao-statistics, Wald-statistics and log-likelihoods. |
normweight |
should the weights be normalized so that they average to one? This option has only an effect if the model is selected using AIC. |
tdata , tcov , tweight
|
test set. Should satisfy the same requirements as |
cv |
in how many subsets should the data be divided for cross-validation? If |
select |
if a test set is provided, or if the model is selected using cross validation, should the model be select that minimizes (misclassification) loss (0), that maximizes test set log-likelihood (1) or that minimizes test set squared error loss (2)? |
loss |
a rectangular matrix specifying the loss function, whose
size is the number of
classes times number of actions.
Used for cross-validation and test set model
selection. |
seed |
optional
seed for the random number generator that determines the sequence of the
cases for cross-validation. If the seed has length 12 or more,
the first twelve elements are assumed to be |
The output is an object of class polyclass, organized to serve as input for plot.polyclass, beta.polyclass, summary.polyclass, ppolyclass (fitted probabilities), cpolyclass (fitted classes) and rpolyclass (random classes). The function returns a list with the following members:
call |
the command that was executed. |
ncov |
number of covariates. |
ndim |
number of dimensions of the fitted model. |
nclass |
number of classes. |
nbas |
number of basis functions. |
naction |
number of possible actions that are considered. |
fcts |
matrix of size second element: which knot ( third element: second covariate involved ( fourth element: knot involved (if the third element is fifth, sixth,... element: beta (coefficient) for class one, two, ... |
knots |
a matrix with |
cv |
in how many sets was the data divided for cross-validation.
Only provided if |
loss |
the loss matrix used in cross-validation and test set.
Only provided if |
penalty |
the parameter used in the AIC criterion. Only provided if |
method |
0 = AIC, 1 = test set, 2 = cross-validation. |
ranges |
column |
logl |
matrix with eight or eleven columns. Summarizes fits. Column one indicates the dimension, column two the AIC or loss value (whichever was used during the model selection), columns three, four and five give the training set log-likelihood, (misclassification) loss and squared error loss, columns six to eight give the same information for the test set, column nine (or column six if |
sample |
sample size. |
tsample |
the sample size of the test set. Only provided if |
wgtsum |
sum of the case weights. |
covnames |
names of the covariates. |
classnames |
(numerical) names of the classes. |
cv.aic |
the penalty value that was determined optimal by
by cross validation. Only provided if |
cv.tab |
table with three columns. Column one and two indicate the penalty parameter
range for which the cv-loss in column three would be realized.
Only provided if |
seed |
the random seed that was used to determine the order
of the cases for cross-validation.
Only provided if |
delete |
were complete basis functions deleted at once (2), were only individual dimensions deleted (1) or was only the addition stage of the model selection carried out (0)? |
beta |
moments of the basis functions. Needed for |
select |
if a test set is provided, or if the model is selected using cross validation, was the model selected that minimized (misclassification) loss (0), that maximized test set log-likelihood (1) or that minimized test set squared error loss (2)? |
anova |
matrix with three columns. The first two elements in a line indicate the subspace to which the line refers. The third element indicates the percentage of variance explained by that subspace. |
twgtsum |
sum of the test set case weights (only if |
Charles Kooperberg [email protected].
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polymars, plot.polyclass, summary.polyclass, beta.polyclass, cpolyclass, ppolyclass, rpolyclass.
data(iris)
fit.iris <- polyclass(iris[,5], iris[,1:4])
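Model selection by cross-validation, described under the cv argument above, can be used instead of AIC; a minimal sketch (the number of folds and the seed value are arbitrary illustrative choices):
fit.iris.cv <- polyclass(iris[,5], iris[,1:4], cv = 10, seed = 73)   # 10-fold cross-validation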
An adaptive regression procedure using piecewise linear splines to model the response.
polymars(responses, predictors, maxsize, gcv = 4, additive = FALSE, startmodel, weights, no.interact, knots, knot.space = 3, ts.resp, ts.pred, ts.weights, classify, factors, tolerance, verbose = FALSE)
responses |
vector of responses, or a matrix for multiple response regression. In the case of a matrix each column corresponds to a response and each row corresponds to an observation. Missing values are not allowed. |
predictors |
matrix of predictor variables for the regression. Each column corresponds to a predictor and each row corresponds to an observation in the same order as they appear in the response argument. Missing values are not allowed. |
maxsize |
the maximum number of basis functions that the model is allowed to grow to in
the stepwise addition procedure. Default is
|
gcv |
parameter used to find the overall best model from a sequence of fitted models.
The residual sum of squares of a model is penalized by dividing by the square of
|
additive |
Should the fitted model be additive in the predictors? |
startmodel |
the first model that is to be fit by |
weights |
optional vector of observation weights; if supplied, the algorithm fits to minimize the sum of the weights multiplied by the squared residuals. The length of weights must be the same as the number of observations. The weights must be nonnegative. |
no.interact |
an optional matrix used if certain predictor interactions are not allowed in the model.
It is given as a matrix of size |
knots |
defines how the function is to find potential knots for the spline basis
functions. This can be set to the maximum number of knots you would
like to be considered for each predictor.
Usually, to avoid the design matrix becoming singular, the actual number of
knots produced is constrained to at most every third order statistic in any
predictor. This constraint can be adjusted using the knot.space argument. When specifying knots as a single number or a matrix and there are categorical variables, these are specified separately using the factors argument (see the sketch after this table). |
knot.space |
is an integer describing the minimum number of order statistics apart that two knots can be. Knots should not be too close together, to ensure numerical stability. |
ts.resp |
testset responses for model selection. Should have the same number of columns
as the training set response. A testset can be used for the model selection.
Depending on the value of classify, either the model with the smallest testset
residual sum of squares or the smallest testset classification error is
provided. Overrides |
ts.pred |
testset predictors. Should have the same number of columns as the training set predictors. |
ts.weights |
testset observation weights. A vector of length equal to the number of cases of the testset. All weights must be non-negative. |
classify |
when the response is discrete (categorical), polymars can be used for
classification. In particular, when |
factors |
used to indicate that certain variables in the predictor set are categorical
variables. Specified as a vector containing the appropriate predictor
indices (column numbers of categorical variables in predictors matrix). Factors
can also be set when the |
tolerance |
for each possible candidate to be added/deleted the resulting residual sums
of squares of the model, with/without this candidate, must be calculated.
The inversion of the "X-transpose by X" matrix, X being the design matrix, is done by an updating procedure; cf. C.R. Rao, Linear Statistical Inference and Its Applications, 2nd edition, page 33.
In the inversion the size of the bottom right-hand entry of this matrix is
critical. If it |
verbose |
when set to |
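As referenced in the knots entry above, a minimal sketch of specifying a single maximum number of candidate knots per predictor (this uses the state data set, as in the Examples section below; the model component is described in the Value section):

data(state)
# a single number: at most 5 (respectively 15) candidate knots per predictor
fit.few  <- polymars(state.x77[, 2], state.x77[, -2], knots = 5)
fit.many <- polymars(state.x77[, 2], state.x77[, -2], knots = 15)
# a larger knot budget gives the stepwise search more candidate basis functions;
# compare the sizes of the selected models
nrow(fit.few$model)
nrow(fit.many$model)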
An object of the class polymars. The returned object contains information about the fitting steps and the model selected. The first data frame contains a row for each step of the fitting procedure. Its columns contain: a 1 for an addition step or a 0 for a deletion step, the size of the model at each step, the residual sum of squares (RSS), and the generalized cross validation value (GCV), testset residual sum of squares, or testset misclassification, whichever was used for the model selection. The second data frame, model, contains a row for each basis function of the model. Each row corresponds to one basis function (with two possible components). The pred1 column contains the indices of the first predictor of the basis functions. Column knot1 is a possible knot in this predictor; if this column is NA, the first component is linear. If any of the basis functions of the model is categorical then there will be a level1 column. Column pred2 is the possible second predictor involved (if it is NA the basis function only depends on one predictor). Column knot2 contains the possible knot for the predictor pred2, and it is NA when this component is linear. This is a similar format to the startmodel argument, together with an additional first row corresponding to the intercept, but startmodel does not use a separate column to specify levels of a categorical variable. If any predictor in pred2 is categorical then there will be a level2 column. The column "coefs" (more than one column in the case of multiple response regression) contains the coefficients. The returned object also contains the fitted values and residuals of the data used in fitting the model.
The algorithm employed by polymars is different from the MARS(tm) algorithm of Friedman (1991), though it has many similarities. (The name polymars has been used for this algorithm well before MARS was trademarked.) Some of the main differences are:
polymars requires linear terms of a predictor to be in the model before nonlinear terms using the same predictor can be added;
polymars requires a univariate basis function to be in the model before a tensor-product basis function involving the univariate basis function can be in the model;
during stepwise deletion the same hierarchy is maintained;
polymars can be fit to multiple outcomes simultaneously, and with categorical outcomes it can be used for multiple classification; and
polyclass uses the same modeling strategy as polymars, but uses a logistic (polychotomous) likelihood.
MARS is a registered trademark of Jeril, Inc. and is used here with permission. Commercial licenses and versions of PolyMARS may be obtained from Salford Systems at http://www.salford-systems.com.
Martin O'Connor.
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Friedman, J. H. (1991). Multivariate adaptive regression splines (with discussion). The Annals of Statistics, 19, 1–141.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polyclass, design.polymars, persp.polymars, plot.polymars, predict.polymars, summary.polymars.
data(state)
state.pm <- polymars(state.region, state.x77, knots = 15, classify = TRUE)
state.pm2 <- polymars(state.x77[, 2], state.x77[,-2], gcv = 2)
plot(fitted(state.pm2), residuals(state.pm2))
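Continuing the example, a short sketch of inspecting the returned object (the model data frame and the fitted values and residuals are described in the Value section above):

# basis functions of the selected model: predictors, knots, and coefficients
state.pm2$model
# fitted values and residuals for the training data
head(fitted(state.pm2))
head(residuals(state.pm2))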
Produces fitted values for a model of class polymars.
## S3 method for class 'polymars'
predict(object, x, classify = FALSE, intercept, ...)
object |
object of the class |
x |
the predictor values at which the fitted values will be computed. The
predictor values can be in a number of formats. It can take the form of a
vector of length equal to the number of predictors in the original data set
or it can be shortened to the length of only those predictors that occur in
the model, in the same order as they appear in the original data set.
Similarly, |
classify |
if the original call to polymars was for a classification problem and you would
like the classifications (class predictions), set this option equal to |
intercept |
Setting intercept equal to |
... |
other arguments are ignored. |
A matrix of fitted values. The number of columns in the returned matrix equals the number of responses in the original call to polymars.
Martin O'Connor.
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polymars, design.polymars, plot.polymars, summary.polymars.
data(state)
state.pm <- polymars(state.region, state.x77, knots = 15, classify = TRUE, gcv = 1)
table(predict(state.pm, x = state.x77, classify = TRUE), state.region)
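For a regression fit the same method returns the matrix of fitted values directly; a minimal sketch (reusing the state data set, with gcv = 2 as in the polymars examples):

data(state)
fit <- polymars(state.x77[, 2], state.x77[, -2], gcv = 2)
pred <- predict(fit, x = state.x77[, -2])
dim(pred)  # one row per observation, one column per response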
This function summarizes both the stepwise selection process of the model fitting by hare and the final model that was selected using AIC/BIC.
## S3 method for class 'hare'
summary(object, ...)
## S3 method for class 'hare'
print(x, ...)
object , x
|
|
... |
other arguments are ignored. |
These functions produce identical printed output. The main body consists of two tables.
The first table has six columns: the first column is a possible number of dimensions for the fitted model;
the second column indicates whether this model was fitted during the addition or deletion stage;
the third column is the log-likelihood for the fit;
the fourth column is -2 * loglikelihood + penalty * (dimension), which is the AIC criterion; hare selected the model with the minimum value of AIC;
the last two columns give the endpoints of the interval of values of penalty that would yield the model with the indicated number of dimensions (NAs imply that the model is not optimal for any choice of penalty).
At the bottom of the first table the dimension of the selected model is reported, as is the value of penalty that was used.
Each row of the second table summarizes the information about a basis function in the final model. It shows the variables involved, the knot locations, the estimated coefficient and its standard error and Wald statistic (estimate/SE).
Since the basis functions are selected in an adaptive fashion, typically most Wald statistics are larger than (the magical) 2. These statistics should be taken with a grain of salt though, as they are inflated because of the adaptivity of the model selection.
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
hare, plot.hare, dhare, hhare, phare, qhare, rhare.
fit <- hare(testhare[,1], testhare[,2], testhare[,3:8])
summary(fit)
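The example can be extended to see how the penalty affects the AIC column and the selected dimension; a hedged sketch (this assumes hare accepts a penalty argument, with a log(samplesize), BIC-like, default):

# refitting with penalty = 2 corresponds to classical AIC and may select
# a model with more dimensions than the default fit above
fit.aic <- hare(testhare[,1], testhare[,2], testhare[,3:8], penalty = 2)
summary(fit.aic)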
This function summarizes both the stepwise selection process of the model fitting by heft and the final model that was selected using AIC/BIC.
## S3 method for class 'heft'
summary(object, ...)
## S3 method for class 'heft'
print(x, ...)
object , x
|
|
... |
other arguments are ignored. |
These functions produce identical printed output. The main body is a table with six columns:
the first column is a possible number of knots for the fitted model;
the second column is 0 if the model was fitted during the addition stage and 1 if the model was fitted during the deletion stage;
the third column is the log-likelihood for the fit;
the fourth column is -2 * loglikelihood + penalty * (dimension), which is the AIC criterion; heft selected the model with the minimum value of AIC;
the fifth and sixth columns give the endpoints of the interval of values of penalty that would yield the model with the indicated number of knots (NAs imply that the model is not optimal for any choice of penalty).
At the bottom of the table the number of knots corresponding to the selected model is reported, as are the value of penalty that was used and the coefficients of the log-based terms in the fitted model and their standard errors.
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
heft, plot.heft, dheft, hheft, pheft, qheft, rheft.
fit1 <- heft(testhare[,1], testhare[,2])
summary(fit1)
# modify tail behavior
fit2 <- heft(testhare[,1], testhare[,2], leftlog = FALSE, rightlog = FALSE, leftlin = TRUE)
summary(fit2)
fit3 <- heft(testhare[,1], testhare[,2], penalty = 0)  # select largest model
summary(fit3)
This function summarizes both the stepwise selection process of the model fitting by logspline and the final model that was selected using AIC/BIC. A logspline object was fit using the 1997 knot addition and deletion algorithm. The 1992 algorithm is available using the oldlogspline function.
## S3 method for class 'logspline'
summary(object, ...)
## S3 method for class 'logspline'
print(x, ...)
object , x
|
|
... |
other arguments are ignored. |
These functions produce identical printed output. The main body is a table with five columns: the first column is a possible number of knots for the fitted model;
the second column is the log-likelihood for the fit;
the third column is -2 * loglikelihood + penalty * (number of knots - 1), which is the AIC criterion; logspline selected the model with the smallest value of AIC;
the fourth and fifth columns give the endpoints of the interval of values of penalty that would yield the model with the indicated number of knots (NAs imply that the model is not optimal for any choice of penalty).
At the bottom of the table the number of knots corresponding to the selected model is reported, as is the value of penalty that was used.
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, plot.logspline, dlogspline, plogspline, qlogspline, rlogspline, oldlogspline.
y <- rnorm(100)
fit <- logspline(y)
summary(fit)
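The penalty interval columns show which models would be selected under other penalties; a hedged sketch (this assumes logspline accepts a penalty argument, with a log(samplesize), BIC-like, default):

y <- rnorm(100)
fit.default <- logspline(y)               # default penalty
fit.aic     <- logspline(y, penalty = 2)  # classical AIC; typically keeps more knots
summary(fit.default)
summary(fit.aic)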
Summary of a model fitted with lspec
## S3 method for class 'lspec'
summary(object, ...)
## S3 method for class 'lspec'
print(x, ...)
object , x
|
|
... |
other options are ignored. |
These functions produce an identical printed summary of an lspec object.
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone, and Young K. Truong (1995). Logspline Estimation of a Possibly Mixed Spectral Distribution. Journal of Time Series Analysis, 16, 359-388.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
lspec, plot.lspec, clspec, dlspec, plspec, rlspec.
data(co2)
co2.detrend <- lm(co2~c(1:length(co2)))$residuals
fit <- lspec(co2.detrend)
summary(fit)
This function summarizes both the stepwise selection process of the model fitting by oldlogspline and the final model that was selected using AIC/BIC. A logspline object was fit using the 1992 knot deletion algorithm (oldlogspline). The 1997 algorithm using knot deletion and addition is available using the logspline function.
## S3 method for class 'oldlogspline'
summary(object, ...)
## S3 method for class 'oldlogspline'
print(x, ...)
object , x
|
|
... |
other arguments are ignored. |
These functions produce identical printed output. The main body is a table with five columns: the first column is a possible number of knots for the fitted model;
the second column is the log-likelihood for the fit;
the third column is -2 * loglikelihood + penalty * (number of knots - 1), which is the AIC criterion; oldlogspline selected the model with the smallest value of AIC;
the fourth and fifth columns give the endpoints of the interval of values of penalty that would yield the model with the indicated number of knots (NAs imply that the model is not optimal for any choice of penalty).
At the bottom of the table the number of knots corresponding to the selected model is reported, as is the value of penalty that was used.
Charles Kooperberg [email protected].
Charles Kooperberg and Charles J. Stone. Logspline density estimation for censored data (1992). Journal of Computational and Graphical Statistics, 1, 301–328.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
logspline, oldlogspline, plot.oldlogspline, doldlogspline, poldlogspline, qoldlogspline, roldlogspline.
y <- rnorm(100)
fit <- oldlogspline(y)
summary(fit)
This function summarizes both the stepwise selection process of the model fitting by polyclass and the final model that was selected.
## S3 method for class 'polyclass'
summary(object, ...)
## S3 method for class 'polyclass'
print(x, ...)
object , x
|
|
... |
other arguments are ignored. |
These functions summarize a polyclass fit identically. They also give information about fits that could have been obtained with other model selection options in polyclass.
Charles Kooperberg [email protected].
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polyclass, plot.polyclass, beta.polyclass, cpolyclass, ppolyclass, rpolyclass.
data(iris)
fit.iris <- polyclass(iris[,5], iris[,1:4])
summary(fit.iris)
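The summary also reflects which model selection options were used; a hedged sketch comparing the default fit above with a cross-validated one (the cv argument is an assumption, suggested by the cv.aic and cv.tab components of polyclass objects):

# hypothetical: choose the penalty by 5-fold cross-validation instead of AIC
fit.cv <- polyclass(iris[,5], iris[,1:4], cv = 5)
summary(fit.cv)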
Gives details of a polymars
object.
## S3 method for class 'polymars'
summary(object, ...)
## S3 method for class 'polymars'
print(x, ...)
object , x
|
object of the class |
... |
other arguments are ignored. |
These two functions provide identical printed information about the fitting steps and the model selected. The first data frame contains a row for each step of the fitting procedure. Its columns contain: a 1 for an addition step or a 0 for a deletion step, the size of the model at each step, the residual sum of squares (RSS), and the generalized cross validation value (GCV), testset residual sum of squares, or testset misclassification, whichever was used for the model selection. The second data frame, model, contains a row for each basis function of the model. Each row corresponds to one basis function (with two possible components). The pred1 column contains the indices of the first predictor of the basis functions. Column knot1 is a possible knot in this predictor; if this column is NA, the first component is linear. If any of the basis functions of the model is categorical then there will be a level1 column. Column pred2 is the possible second predictor involved (if it is NA the basis function only depends on one predictor). Column knot2 contains the possible knot for the predictor pred2, and it is NA when this component is linear. This is a similar format to the startmodel argument, together with an additional first row corresponding to the intercept, but startmodel does not use a separate column to specify levels of a categorical variable. If any predictor in pred2 is categorical then there will be a level2 column. The column "coefs" (more than one column in the case of multiple response regression) contains the coefficients.
Martin O'Connor.
Charles Kooperberg, Smarajit Bose, and Charles J. Stone (1997). Polychotomous regression. Journal of the American Statistical Association, 92, 117–127.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
polymars, design.polymars, persp.polymars, plot.polymars, predict.polymars.
data(state)
state.pm <- polymars(state.region, state.x77, knots = 15, classify = TRUE)
summary(state.pm)
Fake survival analysis data set for testing hare and heft.
testhare
A matrix with 2000 rows (observations) and 8 columns. Column 1 is intended to be the survival time, column 2 the censoring indicator, and columns 3 through 8 are predictors (covariates).
Charles Kooperberg [email protected].
I started out with a real data set; then I sampled, transformed and added noise. Virtually no number is unchanged.
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
harefit <- hare(testhare[,1], testhare[,2], testhare[,3:8])
heftfit <- heft(testhare[,1], testhare[,2])
This function tries to convert a data.frame or a matrix to a no-frills matrix without labels, and a vector or time series to a no-frills vector without labels.
unstrip(x)
x |
one- or two-dimensional object. |
Many of the functions for logspline
, oldlogspline
,
lspec
, polyclass
,
hare
, heft
, and polymars
were written in the “before data.frame” era;
unstrip
attempts to keep all these functions useful with more advanced input objects.
In particular, many of these functions call unstrip
before doing anything else.
If x is two-dimensional, a matrix without names; if x is one-dimensional, a numerical vector.
Charles Kooperberg [email protected].
data(co2)
unstrip(co2)
data(iris)
unstrip(iris)
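A small check of what the conversion yields, continuing the example above; the expectations in the comments are assumptions that follow the Value description:

# two-dimensional input becomes a plain, unlabeled matrix
is.matrix(unstrip(iris[, 1:4]))   # expected TRUE
dimnames(unstrip(iris[, 1:4]))    # expected NULL, labels are stripped
# a time series becomes a plain numeric vector
is.vector(unstrip(co2))           # expected TRUE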
Driver function for dhare, hhare, phare, qhare, and rhare. This function is not intended for use by itself.
xhare(arg1, arg2, arg3, arg4)
arg1 , arg2 , arg3 , arg4
|
arguments. |
This function is used internally.
This function is not intended for direct use.
Charles Kooperberg [email protected].
Charles Kooperberg, Charles J. Stone and Young K. Truong (1995). Hazard regression. Journal of the American Statistical Association, 90, 78-94.
Charles J. Stone, Mark Hansen, Charles Kooperberg, and Young K. Truong. The use of polynomial splines and their tensor products in extended linear modeling (with discussion) (1997). Annals of Statistics, 25, 1371–1470.
hare, dhare, hhare, phare, qhare, rhare.