Title: | Vector Generalized Linear and Additive Models |
---|---|
Description: | An implementation of about 6 major classes of statistical regression models. The central algorithm is Fisher scoring and iterative reweighted least squares. At the heart of this package are the vector generalized linear and additive model (VGLM/VGAM) classes. VGLMs can be loosely thought of as multivariate GLMs. VGAMs are data-driven VGLMs that use smoothing. The book "Vector Generalized Linear and Additive Models: With an Implementation in R" (Yee, 2015) <DOI:10.1007/978-1-4939-2818-7> gives details of the statistical framework and the package. Currently only fixed-effects models are implemented. Many (100+) models and distributions are estimated by maximum likelihood estimation (MLE) or penalized MLE. The other classes are RR-VGLMs (reduced-rank VGLMs), quadratic RR-VGLMs, doubly constrained RR-VGLMs, reduced-rank VGAMs, and RCIMs (row-column interaction models)---these classes permit constrained and unconstrained quadratic ordination (CQO/UQO) in ecology, as well as constrained additive ordination (CAO). Hauck-Donner effect detection is implemented. Note that these functions are subject to change; see the NEWS and ChangeLog files for latest changes. |
Authors: | Thomas Yee [aut, cre] , Cleve Moler [ctb] (LINPACK routines in src) |
Maintainer: | Thomas Yee <[email protected]> |
License: | GPL-3 |
Version: | 1.1-12 |
Built: | 2024-11-03 06:44:15 UTC |
Source: | CRAN |
VGAM provides functions for fitting vector generalized linear and additive models (VGLMs and VGAMs), and associated models (Reduced-rank VGLMs or RR-VGLMs, Doubly constrained RR-VGLMs (DRR-VGLMs), Quadratic RR-VGLMs, Reduced-rank VGAMs). This package fits many models and distributions by maximum likelihood estimation (MLE) or penalized MLE, under this statistical framework. Also fits constrained ordination models in ecology such as constrained quadratic ordination (CQO).
This package centers on the iteratively reweighted least squares (IRLS) algorithm. Other key words include Fisher scoring, additive models, reduced-rank regression, penalized likelihood, and constrained ordination.
The central modelling functions are vglm, vgam, rrvglm, rcim, cqo and cao.
Function vglm operates very similarly to glm but is much more general, and many methods functions such as coef and predict are available. The package uses S4 (see the methods package).
Some notable companion packages: (1) VGAMdata mainly contains data sets useful for illustrating VGAM; some of the big ones were initially from VGAM, and recently some older VGAM family functions have been shifted into it. (2) VGAMextra, written by Victor Miranda, has some additional VGAM family and link functions, with a bent towards time series models. (3) svyVGAM provides design-based inference, e.g., for survey sampling settings; this is possible because the weights argument of vglm can be assigned any positive values, including survey weights.
Compared to other similar packages, such as gamlss and mgcv, VGAM has more models implemented (150+ of them) and they are not restricted to a location-scale-shape framework or (largely) the 1-parameter exponential family. The general statistical framework behind it all, once grasped, makes regression modelling unified.
Some features of the package are: (i) many family functions handle multiple responses; (ii) reduced-rank regression is available by operating on latent variables (optimal linear combinations of the explanatory variables); (iii) basic automatic smoothing parameter selection is implemented for VGAMs (sm.os and sm.ps with a call to magic), although it has to be refined; (iv) smart prediction allows correct prediction of nested terms in the formula provided smart functions are used, as in the sketch below.
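A minimal sketch of smart prediction on simulated data (sm.poly is the smart analogue of poly; the data and settings here are illustrative only):

sdata <- data.frame(x2 = runif(30))
sdata <- transform(sdata, y = rpois(30, exp(1 + x2)))
fit.sp <- vglm(y ~ sm.poly(x2, 2), poissonff, data = sdata)
predict(fit.sp, newdata = data.frame(x2 = c(0.25, 0.75)))  # nested term handled correctly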
The GLM and GAM classes are special cases of VGLMs and VGAMs. The VGLM/VGAM framework is intended to be very general so that it encompasses as many distributions and models as possible. VGLMs are limited only by the assumption that the regression coefficients enter through a set of linear predictors. The VGLM class is very large and encompasses a wide range of multivariate response types and models, e.g., it includes univariate and multivariate distributions, categorical data analysis, extreme values, correlated binary data, quantile and expectile regression, and time series problems. Potentially, it can handle generalized estimating equations, survival analysis, bioassay data and nonlinear least-squares problems.
Crudely, VGAMs are to VGLMs what GAMs are to GLMs. Two types of VGAMs are implemented: 1st-generation VGAMs with s use vector backfitting, while 2nd-generation VGAMs with sm.os and sm.ps use O-splines and P-splines, so they have a direct solution (hence avoiding backfitting) and have automatic smoothing parameter selection. The former is older and is based on Yee and Wild (1996). The latter is more modern (Yee, Somchit and Wild, 2024) but requires a reasonably large number of observations to work well because it is based on optimizing over a predictive criterion rather than using a Bayesian approach.
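A minimal sketch contrasting the two generations on the hunua data set (the df value and other settings are illustrative only):

fit.g1 <- vgam(agaaus ~ s(altitude, df = 3), binomialff, data = hunua)  # 1st generation; backfitting
fit.g2 <- vgam(agaaus ~ sm.os(altitude), binomialff, data = hunua)  # 2nd generation; automatic smoothing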
An important feature of the framework is that of constraint matrices. They apportion the regression coefficients according to each explanatory variable. For example, since each parameter has a link function applied to it to turn it into a linear or additive predictor, does a covariate have an equal effect on each parameter? Or no effect? Arguments such as zero, parallel and exchangeable are merely easy ways to have them constructed internally. Users may input them explicitly using the constraints argument, and CM.symm0 etc. can make this easier.
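As a sketch, uninormal has two linear predictors (for the mean and the log of the standard deviation), so zero = 2 makes the second one intercept-only; the resulting constraint matrices can then be inspected (illustrative settings, using the bmi.nz data frame):

fit.cm <- vglm(BMI ~ age, uninormal(zero = 2), data = bmi.nz)
constraints(fit.cm)  # 'age' gets the 2 x 1 constraint matrix (1, 0)^T
coef(fit.cm, matrix = TRUE)  # second column is intercept-only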
Another important feature is implemented by xij. It allows different linear/additive predictors to have different values of the same explanatory variable, e.g., multinomial for the conditional logit model and the like.
VGLMs with dimension reduction form the class of RR-VGLMs. This is achieved by reduced rank regression. Here, a subset of the constraint matrices are estimated rather than being known and prespecified. Optimal linear combinations of the explanatory variables are taken (creating latent variables) which are used for fitting a VGLM. Thus the regression can be thought of as being in two stages. The class of DRR-VGLMs provides further structure to RR-VGLMs by allowing constraint matrices to be specified for each column of A and row of C. Thus the reduced rank regression can be fitted with greater control.
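A small sketch of a rank-1 RR-VGLM on the pneumo data (assuming the default Rank = 1 and corner constraints; see rrvglm for details):

pneumo <- transform(pneumo, let = log(exposure.time))
fit.rr <- rrvglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)
Coef(fit.rr)  # estimated A, C and B1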
This package is the first to check for the Hauck-Donner effect (HDE) in regression models; see hdeff. This is an aberration of the Wald statistics when the parameter estimates are too close to the boundary of the parameter space. When present, the p-value of a regression coefficient is biased upwards, so that a highly significant variable might be deemed nonsignificant. Thus the HDE can create havoc for variable selection!
Somewhat related to the previous paragraph, hypothesis testing using the likelihood ratio test, Rao's score test (Lagrange multiplier test) and (modified) Wald's test are all available; see summaryvglm. For all regression coefficients of a model, taken one at a time, all three methods require further IRLS iterations to obtain new values of the other regression coefficients after one of the coefficients has had its value set (usually to 0). Hence the overall computational load is significant.
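A minimal sketch of both facilities, using the same proportional odds fit as in the examples below:

pneumo <- transform(pneumo, let = log(exposure.time))
fit.hd <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
hdeff(fit.hd)  # TRUE for any coefficient flagged with the HDE
summary(fit.hd, lrt0 = TRUE, score0 = TRUE, wald0 = TRUE)  # all three tests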
For a complete list of functions and data sets in this package, use library(help = "VGAM").
New VGAM family functions are continually being written and added to the package. This package is undergoing continual development and improvement; therefore users should treat many things as subject to change. This includes the family function names, argument names, many of the internals, moving some functions to VGAMdata, the use of link functions, and slot names. For example, many link functions were renamed in 2019 so that they all end in "link", e.g., loglink() instead of loge(). Some future pain can be avoided by using good programming techniques, e.g., using extractor functions such as coef(), weights(), vcov(), predict(). Although changes are now less frequent, please expect changes in all aspects of the package. See the NEWS file for a list of changes from version to version.
Thomas W. Yee, [email protected], with contributions from Victor Miranda and several graduate students over the years, especially Xiangjie (Albert) Xue and Chanatda Somchit.
Maintainer: Thomas Yee [email protected].
Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. and Stephenson, A. G. (2007). Vector generalized linear and additive extreme value models. Extremes, 10, 1–19.
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
Yee, T. W. (2008). The VGAM Package. R News, 8, 28–39.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
Yee, T. W. (2022). On the Hauck-Donner effect in Wald tests: Detection, tipping points and parameter space characterization, Journal of the American Statistical Association, 117, 1763–1774. doi:10.1080/01621459.2021.1886936.
Yee, T. W. and Somchit, C. and Wild, C. J. (2024). Penalized vector generalized additive models. Manuscript in preparation.
The website for the VGAM package and book is https://www.stat.auckland.ac.nz/~yee/. There are some resources there, especially as relating to my book and new features added to VGAM.
Some useful background references for the package include:
Chambers, J. and Hastie, T. (1991). Statistical Models in S. Wadsworth & Brooks/Cole.
Green, P. J. and Silverman, B. W. (1994). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach. Chapman and Hall.
Hastie, T. J. and Tibshirani, R. J. (1990). Generalized Additive Models. Chapman and Hall.
vglm, vgam, rrvglm, rcim, cqo, TypicalVGAMfamilyFunction, CommonVGAMffArguments, Links, hdeff, glm, lm, https://CRAN.R-project.org/package=VGAM.
# Example 1; proportional odds model
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo))
depvar(fit1)  # Better than using fit1@y; dependent variable (response)
weights(fit1, type = "prior")  # Number of observations
coef(fit1, matrix = TRUE)  # p.179, in McCullagh and Nelder (1989)
constraints(fit1)  # Constraint matrices
summary(fit1)  # HDE could affect these results
summary(fit1, lrt0 = TRUE, score0 = TRUE, wald0 = TRUE)  # No HDE
hdeff(fit1)  # Check for any Hauck-Donner effect

# Example 2; zero-inflated Poisson model
zdata <- data.frame(x2 = runif(nn <- 2000))
zdata <- transform(zdata, pstr0 = logitlink(-0.5 + 1*x2, inverse = TRUE),
                          lambda = loglink( 0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y = rzipois(nn, lambda, pstr0 = pstr0))
with(zdata, table(y))
fit2 <- vglm(y ~ x2, zipoisson, data = zdata, trace = TRUE)
coef(fit2, matrix = TRUE)  # These should agree with the above values

# Example 3; fit a two species GAM simultaneously
fit3 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(2, 3)),
             binomialff(multiple.responses = TRUE), data = hunua)
coef(fit3, matrix = TRUE)  # Not really interpretable
## Not run: 
plot(fit3, se = TRUE, overlay = TRUE, lcol = 3:4, scol = 3:4)
ooo <- with(hunua, order(altitude))
with(hunua, matplot(altitude[ooo], fitted(fit3)[ooo, ], type = "l",
     lwd = 2, col = 3:4, xlab = "Altitude (m)",
     ylab = "Probability of presence", las = 1,
     main = "Two plant species' response curves", ylim = c(0, 0.8)))
with(hunua, rug(altitude))
## End(Not run)

# Example 4; LMS quantile regression
fit4 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1),
             data = bmi.nz, trace = TRUE)
head(predict(fit4))
head(fitted(fit4))
head(bmi.nz)  # Person 1 is near the lower quartile among people his age
head(cdf(fit4))
## Not run: 
par(mfrow = c(1, 1), bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE)
qtplot(fit4, percentiles = c(5, 50, 90, 99), main = "Quantiles", las = 1,
       xlim = c(15, 90), ylab = "BMI", lwd = 2, lcol = 4)  # Quantile plot
ygrid <- seq(15, 43, len = 100)  # BMI ranges
par(mfrow = c(1, 1), lwd = 2)  # Density plot
aa <- deplot(fit4, x0 = 20, y = ygrid, xlab = "BMI", col = "black",
    main = "Density functions at Age=20 (black), 42 (red) and 55 (blue)")
aa
aa <- deplot(fit4, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red")
aa <- deplot(fit4, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue",
             Attach = TRUE)
aa@post$deplot  # Contains density function values
## End(Not run)

# Example 5; GEV distribution for extremes
(fit5 <- vglm(maxtemp ~ 1, gevff, data = oxtemp, trace = TRUE))
head(fitted(fit5))
coef(fit5, matrix = TRUE)
Coef(fit5)
vcov(fit5)
vcov(fit5, untransform = TRUE)
sqrt(diag(vcov(fit5)))  # Approximate standard errors
## Not run: rlplot(fit5)
Estimates the three independent parameters of the A1A2A3 blood group system.
A1A2A3(link = "logitlink", inbreeding = FALSE, ip1 = NULL, ip2 = NULL, iF = NULL)
A1A2A3(link = "logitlink", inbreeding = FALSE, ip1 = NULL, ip2 = NULL, iF = NULL)
link |
Link function applied to p1, p2 and f. |
inbreeding |
Logical. Is there inbreeding? |
ip1, ip2, iF |
Optional initial values for p1, p2 and f. |
The parameters p1 and p2 are probabilities, so that p3 = 1 - p1 - p2 is the third probability. The parameter f is the third independent parameter if inbreeding = TRUE. If inbreeding = FALSE then f = 0 and Hardy-Weinberg Equilibrium (HWE) is assumed.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
The input can be a 6-column matrix of counts, with columns corresponding to A1A1, A1A2, A1A3, A2A2, A2A3, A3A3 (in order). Alternatively, the input can be a 6-column matrix of proportions (so each row adds to 1) and the weights argument is used to specify the total number of counts for each row.
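As an illustrative sketch (ordinary HWE genotype frequencies computed directly, not a call to the family function), the six column probabilities implied by given p1 and p2 are:

p1 <- 0.3; p2 <- 0.25; p3 <- 1 - p1 - p2  # hypothetical values
c(A1A1 = p1^2, A1A2 = 2*p1*p2, A1A3 = 2*p1*p3,
  A2A2 = p2^2, A2A3 = 2*p2*p3, A3A3 = p3^2)  # sums to 1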
T. W. Yee
Lange, K. (2002). Mathematical and Statistical Methods for Genetic Analysis, 2nd ed. New York: Springer-Verlag.
AA.Aa.aa, AB.Ab.aB.ab, ABO, MNSs.
ymat <- cbind(108, 196, 429, 143, 513, 559)
fit <- vglm(ymat ~ 1, A1A2A3(link = probitlink), trace = TRUE, crit = "coef")
fit <- vglm(ymat ~ 1, A1A2A3(link = logitlink, ip1 = 0.3, ip2 = 0.3,
                             iF = 0.02), trace = TRUE, crit = "coef")
Coef(fit)  # Estimated p1 and p2
rbind(ymat, sum(ymat) * fitted(fit))
sqrt(diag(vcov(fit)))
Estimates the parameter of the AA-Aa-aa blood group system, with or without Hardy-Weinberg equilibrium.
AA.Aa.aa(linkp = "logitlink", linkf = "logitlink", inbreeding = FALSE, ipA = NULL, ifp = NULL, zero = NULL)
linkp, linkf |
Link functions applied to pA and f. |
ipA, ifp |
Optional initial values for pA and f. |
inbreeding |
Logical. Is there inbreeding? |
zero |
See CommonVGAMffArguments for information. |
This one or two parameter model involves a probability called pA. The probability of getting a count in the first column of the input (an AA) is pA*pA. When inbreeding = TRUE, an additional parameter f is used. If inbreeding = FALSE then f = 0 and Hardy-Weinberg Equilibrium (HWE) is assumed. The EIM is used if inbreeding = FALSE.
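For orientation, under HWE (inbreeding = FALSE) the three column probabilities implied by a given pA can be sketched directly (hypothetical value):

pA <- 0.6
c(AA = pA^2, Aa = 2 * pA * (1 - pA), aa = (1 - pA)^2)  # sums to 1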
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
Setting inbreeding = FALSE makes estimation difficult with non-intercept-only models. Currently, this code seems to work with intercept-only models.
The input can be a 3-column matrix of counts, where the columns are AA, Aa and aa (in order). Alternatively, the input can be a 3-column matrix of proportions (so each row adds to 1) and the weights argument is used to specify the total number of counts for each row.
T. W. Yee
Weir, B. S. (1996). Genetic Data Analysis II: Methods for Discrete Population Genetic Data, Sunderland, MA: Sinauer Associates, Inc.
AB.Ab.aB.ab, ABO, A1A2A3, MNSs.
y <- cbind(53, 95, 38)
fit1 <- vglm(y ~ 1, AA.Aa.aa, trace = TRUE)
fit2 <- vglm(y ~ 1, AA.Aa.aa(inbreeding = TRUE), trace = TRUE)
rbind(y, sum(y) * fitted(fit1))
Coef(fit1)  # Estimated pA
Coef(fit2)  # Estimated pA and f
summary(fit1)
Estimates the parameter of the AB-Ab-aB-ab blood group system.
AB.Ab.aB.ab(link = "logitlink", init.p = NULL)
link |
Link function applied to p. |
init.p |
Optional initial value for p. |
This one-parameter model involves a probability called p.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
The input can be a 4-column matrix of counts, where the columns are AB, Ab, aB and ab (in order). Alternatively, the input can be a 4-column matrix of proportions (so each row adds to 1) and the weights argument is used to specify the total number of counts for each row.
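The examples below recover p via p = sqrt(4 * P(ab)). Assuming the classical linkage parameterization with cell probabilities ((2 + p^2)/4, (1 - p^2)/4, (1 - p^2)/4, p^2/4), a quick sketch is:

p <- 0.3  # hypothetical value
probs <- c(AB = (2 + p^2)/4, Ab = (1 - p^2)/4, aB = (1 - p^2)/4, ab = p^2/4)
sum(probs)  # 1
sqrt(4 * probs["ab"])  # recovers p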
T. W. Yee
Lange, K. (2002). Mathematical and Statistical Methods for Genetic Analysis, 2nd ed. New York: Springer-Verlag.
ymat <- cbind(AB = 1997, Ab = 906, aB = 904, ab = 32)  # Data from Fisher (1925)
fit <- vglm(ymat ~ 1, AB.Ab.aB.ab(link = "identitylink"), trace = TRUE)
fit <- vglm(ymat ~ 1, AB.Ab.aB.ab, trace = TRUE)
rbind(ymat, sum(ymat) * fitted(fit))
Coef(fit)  # Estimated p
p <- sqrt(4 * (fitted(fit)[, 4]))
p * p
summary(fit)
Estimates the two independent parameters of the ABO blood group system.
ABO(link.pA = "logitlink", link.pB = "logitlink", ipA = NULL, ipB = NULL, ipO = NULL, zero = NULL)
link.pA, link.pB |
Link functions applied to pA and pB. |
ipA, ipB, ipO |
Optional initial values for pA, pB and pO. |
zero |
Details at CommonVGAMffArguments. |
The parameters pA and pB are probabilities, so that pO = 1 - pA - pB is the third probability. The probabilities pA and pB correspond to A and B respectively, so that pO is the probability for O. It is easier to make use of initial values for pO than for pB. In documentation elsewhere I sometimes use pA = p, pB = q, pO = r.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
The input can be a 4-column matrix of counts, where the columns are A, B, AB, O (in order). Alternatively, the input can be a 4-column matrix of proportions (so each row adds to 1) and the weights argument is used to specify the total number of counts for each row.
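As a sketch (standard HWE phenotype frequencies computed directly, with hypothetical allele frequencies), the four column probabilities implied by pA and pB are:

pA <- 0.3; pB <- 0.1; pO <- 1 - pA - pB
c(A = pA^2 + 2*pA*pO, B = pB^2 + 2*pB*pO, AB = 2*pA*pB, O = pO^2)  # sums to 1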
T. W. Yee
Lange, K. (2002). Mathematical and Statistical Methods for Genetic Analysis, 2nd ed. New York: Springer-Verlag.
AA.Aa.aa, AB.Ab.aB.ab, A1A2A3, MNSs.
ymat <- cbind(A = 725, B = 258, AB = 72, O = 1073)  # Order matters, not the name
fit <- vglm(ymat ~ 1, ABO(link.pA = "identitylink",
                          link.pB = "identitylink"),
            trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)  # Estimated pA and pB
rbind(ymat, sum(ymat) * fitted(fit))
sqrt(diag(vcov(fit)))
Fits an adjacent categories regression model to a (preferably ordered) factor response.
acat(link = "loglink", parallel = FALSE, reverse = FALSE, zero = NULL, ynames = FALSE, Thresh = NULL, Trev = reverse, Tref = if (Trev) "M" else 1, whitespace = FALSE)
acat(link = "loglink", parallel = FALSE, reverse = FALSE, zero = NULL, ynames = FALSE, Thresh = NULL, Trev = reverse, Tref = if (Trev) "M" else 1, whitespace = FALSE)
link |
Link function applied to the ratios of the adjacent categories probabilities. See Links for more choices. |
parallel |
A logical, or formula specifying which terms have equal/unequal coefficients. |
reverse |
Logical. By default, the linear/additive predictors used are log(P[Y = j + 1] / P[Y = j]) for j = 1, ..., M; if reverse = TRUE then log(P[Y = j] / P[Y = j + 1]) is used. |
ynames |
See CommonVGAMffArguments for information. |
zero |
An integer-valued vector specifying which linear/additive predictors are modelled as intercepts only. The values must be from the set {1, 2, ..., M}. |
Thresh, Trev, Tref |
See cumulative for information. |
whitespace |
See CommonVGAMffArguments for information. |
In this help file the response Y is assumed to be a factor with ordered values 1, 2, ..., M+1, so that M is the number of linear/additive predictors eta_j. By default, the log link is used because the ratio of two probabilities is positive. Internally, deriv3 is called to perform symbolic differentiation and consequently this family function will struggle if M becomes too large. If this occurs, try combining levels so that M is effectively reduced. One idea is to aggregate levels with the fewest observations in them first.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
No check is made to verify that the response is ordinal if the response is a matrix; see ordered. The response should be either a matrix of counts (with row sums that are all positive), or an ordered factor. In both cases, the y slot returned by vglm/vgam/rrvglm is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (multinomial) is more appropriate.
Here is an example of the usage of the parallel argument. If there are covariates x1, x2 and x3, then parallel = TRUE ~ x1 + x2 - 1 and parallel = FALSE ~ x3 are equivalent. This would constrain the regression coefficients for x1 and x2 to be equal; those of the intercepts and x3 would be different.
Thomas W. Yee
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
Tutz, G. (2012). Regression for Categorical Data. Cambridge: Cambridge University Press.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10.
cumulative, cratio, sratio, multinomial, CM.equid, CommonVGAMffArguments, margeff, pneumo, budworm, deriv3.
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let, acat, pneumo))
coef(fit, matrix = TRUE)
constraints(fit)
model.matrix(fit)
Compute all the single terms in the scope argument that can be added to or dropped from the model, fit those models, and compute a table of the changes in fit.
## S3 method for class 'vglm'
add1(object, scope, test = c("none", "LRT"), k = 2, ...)
## S3 method for class 'vglm'
drop1(object, scope, test = c("none", "LRT"), k = 2, ...)
object |
a fitted vglm model object. |
scope, k |
See add1.glm. |
test |
Same as in add1.glm. |
... |
further arguments passed to or from other methods. |
These functions are a direct adaptation of add1.glm and drop1.glm for vglm-class objects. For drop1 methods, a missing scope is taken to be all terms in the model. The hierarchy is respected when considering terms to be added or dropped: all main effects contained in a second-order interaction must remain, and so on. In a scope formula . means ‘what is already there’.
Compared to add1.glm and drop1.glm these functions are simpler, e.g., there are no Cp, F and Rao (score) tests, and no x and scale arguments. Most models do not have a deviance; however, twice the log-likelihood differences are used to test the significance of terms.
The default output table gives AIC, defined as minus twice the maximized log-likelihood plus k times the rank of the model (the number of effective parameters), where k = 2 by default. This is only defined up to an additive constant (like log-likelihoods).
An object of class "anova"
summarizing the differences
in fit between the models.
In general, the same warnings in add1.glm and drop1.glm apply here. Furthermore, these functions have not been rigorously tested for all models, so treat the results cautiously and please report any bugs. Care is needed to check that the constraint matrices of added terms are correct. Also, if object is of the form vglm(..., constraints = list(x1 = cm1, x2 = cm2)) then add1.vglm may fail because the constraints argument needs to have the constraint matrices for all terms.
Most VGAM family functions do not compute a deviance; instead, the likelihood function is evaluated at the MLE. Hence a column name "Deviance" only appears for a few models, and almost always there is a column labelled "logLik".
step4vglm, vglm, extractAIC.vglm, trim.constraints, anova.vglm, backPain2, update.
data("backPain2", package = "VGAM") summary(backPain2) fit1 <- vglm(pain ~ x2 + x3 + x4, propodds, data = backPain2) coef(fit1) add1(fit1, scope = ~ x2 * x3 * x4, test = "LRT") drop1(fit1, test = "LRT") fit2 <- vglm(pain ~ x2 * x3 * x4, propodds, data = backPain2) drop1(fit2)
data("backPain2", package = "VGAM") summary(backPain2) fit1 <- vglm(pain ~ x2 + x3 + x4, propodds, data = backPain2) coef(fit1) add1(fit1, scope = ~ x2 * x3 * x4, test = "LRT") drop1(fit1, test = "LRT") fit2 <- vglm(pain ~ x2 * x3 * x4, propodds, data = backPain2) drop1(fit2)
Calculates the Akaike information criterion for a fitted model object for which a log-likelihood value has been obtained.
AICvlm(object, ..., corrected = FALSE, k = 2)
AICvgam(object, ..., k = 2)
AICrrvglm(object, ..., k = 2)
AICdrrvglm(object, ..., k = 2)
AICqrrvglm(object, ..., k = 2)
AICrrvgam(object, ..., k = 2)
object |
Some VGAM object, for example, having class vglm-class. |
... |
Other possible arguments fed into logLik in order to compute the log-likelihood. |
corrected |
Logical, perform the finite sample correction? |
k |
Numeric, the penalty per parameter to be used; the default is the classical AIC. |
The following formula is used for VGLMs: AIC = -2 * log-likelihood + k * npar, where npar represents the number of parameters in the fitted model, and k = 2 for the usual AIC. One could assign k = log(n) (where n is the number of observations) for the so-called BIC or SBC (Schwarz's Bayesian criterion). This is the function AICvlm().
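For example, a BIC/SBC-type value can be sketched by supplying k = log(n) (assuming nobs() returns the number of observations; cf. BICvlm):

pneumo <- transform(pneumo, let = log(exposure.time))
fit.ic <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
AIC(fit.ic)  # k = 2, the classical AIC
AIC(fit.ic, k = log(nobs(fit.ic)))  # BIC/SBC-type value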
This code relies on the log-likelihood being defined, and computed, for the object. When comparing fitted objects, the smaller the AIC, the better the fit. The log-likelihood and hence the AIC is only defined up to an additive constant.
Any estimated scale parameter (in GLM parlance) is used as one parameter.
For VGAMs and CAO the nonlinear effective degrees of freedom for each smoothed component is used. This formula is heuristic. These are the functions AICvgam() and AICcao().
The finite sample correction is usually recommended when the sample size is small or when the number of parameters is large. When the sample size is large their difference tends to be negligible. The correction is described in Hurvich and Tsai (1989), and is based on a (univariate) linear model with normally distributed errors.
Returns a numeric value with the corresponding AIC (or BIC, or ..., depending on k).
This code has not been double-checked. The general applicability of AIC for the VGLM/VGAM classes has not been developed fully. In particular, AIC should not be run on some VGAM family functions because of violation of certain regularity conditions, etc. AIC has not been defined for QRR-VGLMs yet. Using AIC to compare posbinomial models with, e.g., posbernoulli.tb models, requires posbinomial(omit.constant = TRUE). See posbinomial for an example. A warning is given if it suspects a wrong omit.constant value was used. Where defined, AICc(...) is the same as AIC(..., corrected = TRUE).
T. W. Yee.
Hurvich, C. M. and Tsai, C.-L. (1989). Regression and time series model selection in small samples, Biometrika, 76, 297–307.
VGLMs are described in vglm-class; VGAMs are described in vgam-class; RR-VGLMs are described in rrvglm-class. See also AIC, BICvlm, TICvlm, drop1.vglm, extractAIC.vglm.
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = TRUE, reverse = TRUE), data = pneumo))
coef(fit1, matrix = TRUE)
AIC(fit1)
AICc(fit1)  # Quick way
AIC(fit1, corrected = TRUE)  # Slow way
(fit2 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE), data = pneumo))
coef(fit2, matrix = TRUE)
AIC(fit2)
AICc(fit2)
AIC(fit2, corrected = TRUE)
Computes some arcsine–logit mixture link transformations, including their inverse and the first few derivatives.
alogitlink(theta, bvalue = NULL, taumix.logit = 1, tol = 1e-13, nmax = 99,
           inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE,
           c10 = c(4, -pi))
lcalogitlink(theta, bvalue = NULL, pmix.logit = 0.01, tol = 1e-13, nmax = 99,
             inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE,
             c10 = c(4, -pi))
theta |
Numeric or character. See below for further details. |
bvalue |
See Links. |
taumix.logit |
Numeric, of length 1. Mixing parameter assigned to |
pmix.logit |
Numeric, of length 1. Mixing probability assigned to |
tol, nmax |
Arguments fed into a function implementing a vectorized bisection method. |
inverse, deriv, short, tag |
Details at Links. |
c10 |
lcalogitlink is a linear combination (LC) of asinlink and logitlink.
The following holds for the LC variant. For deriv >= 0, the value (1 - pmix.logit) * asinlink(p, deriv = deriv) + pmix.logit * logitlink(p, deriv = deriv) is returned when inverse = FALSE; if inverse = TRUE then a nonlinear equation is solved for the probability, given eta. For deriv = 1, the function returns d eta / d theta as a function of theta if inverse = FALSE, else if inverse = TRUE then it returns the reciprocal.
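A quick numerical sketch of the stated linear combination, using the default pmix.logit = 0.01:

p <- seq(0.05, 0.95, by = 0.15)
manual <- (1 - 0.01) * asinlink(p) + 0.01 * logitlink(p)
max(abs(manual - lcalogitlink(p)))  # should be essentially 0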
The default values for taumix.logit and pmix.logit may change in the future. The name and order of the arguments may change too.
Thomas W. Yee
Hauck, J. W. W. and A. Donner (1977). Wald's test as applied to hypotheses in logit analysis. Journal of the American Statistical Association, 72, 851–853.
asinlink, logitlink, Links, probitlink, clogloglink, cauchitlink, binomialff, sloglink, hdeff, https://www.cia.gov/index.html.
p <- seq(0.01, 0.99, length = 10)
alogitlink(p)
max(abs(alogitlink(alogitlink(p), inv = TRUE) - p))  # 0?

## Not run: 
par(mfrow = c(2, 2), lwd = (mylwd <- 2))
y <- seq(-4, 4, length = 100)
p <- seq(0.01, 0.99, by = 0.01)

for (d in 0:1) {
  matplot(p, cbind(logitlink(p, deriv = d), probitlink(p, deriv = d)),
          type = "n", col = "blue", ylab = "transformation", las = 1,
          main = if (d == 0) "Some probability link functions" else
                 "First derivative")
  lines(p, logitlink(p, deriv = d), col = "green")
  lines(p, probitlink(p, deriv = d), col = "blue")
  lines(p, clogloglink(p, deriv = d), col = "tan")
  lines(p, alogitlink(p, deriv = d), col = "red3")
  if (d == 0) {
    abline(v = 0.5, h = 0, lty = "dashed")
    legend(0, 4.5, c("logitlink", "probitlink", "clogloglink",
                     "alogitlink"), lwd = mylwd,
           col = c("green", "blue", "tan", "red3"))
  } else
    abline(v = 0.5, lwd = 0.5, col = "gray")
}

for (d in 0) {
  matplot(y, cbind(logitlink(y, deriv = d, inverse = TRUE),
                   probitlink(y, deriv = d, inverse = TRUE)),
          type = "n", col = "blue", xlab = "transformation", ylab = "p",
          main = if (d == 0) "Some inverse probability link functions" else
                 "First derivative", las = 1)
  lines(y, logitlink(y, deriv = d, inverse = TRUE), col = "green")
  lines(y, probitlink(y, deriv = d, inverse = TRUE), col = "blue")
  lines(y, clogloglink(y, deriv = d, inverse = TRUE), col = "tan")
  lines(y, alogitlink(y, deriv = d, inverse = TRUE), col = "red3")
  if (d == 0) {
    abline(h = 0.5, v = 0, lwd = 0.5, col = "gray")
    legend(-4, 1, c("logitlink", "probitlink", "clogloglink",
                    "alogitlink"), lwd = mylwd,
           col = c("green", "blue", "tan", "red3"))
  }
}
par(lwd = 1)
## End(Not run)
Return the altered, inflated, truncated and deflated values in a GAITD regression object, else test whether the model is altered, inflated, truncated or deflated.
altered(object, ...)
inflated(object, ...)
truncated(object, ...)
is.altered(object, ...)
is.deflated(object, ...)
is.inflated(object, ...)
is.truncated(object, ...)
object |
an object of class "vglm". |
... |
any additional arguments, to future-proof this function. |
Yee and Ma (2024) propose GAITD regression where values from four (or seven, since there are parametric and nonparametric forms) disjoint sets are referred to as special. These extractor functions return one set each; they are the alter, inflate, truncate, deflate (and sometimes max.support) arguments from the family function.
Returns one type of ‘special’ set associated with GAITD regression. This is a vector, else a list for truncation. All three sets are returned by specialsvglm.
Some of these functions are subject to change. Only family functions beginning with "gaitd" will work with these functions; hence zipoisson fits will return FALSE or empty values.
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
vglm, vglm-class, specialsvglm, gaitdpoisson, gaitdlog, gaitdzeta, Gaitdpois.
## Not run: 
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
fit1 <- vglm(y ~ 1, gaitdpoisson(a.mix = 0), data = abdata,
             weight = w, subset = w > 0)
specials(fit1)  # All three sets
altered(fit1)  # Subject to change
inflated(fit1)  # Subject to change
truncated(fit1)  # Subject to change
is.altered(fit1)
is.inflated(fit1)
is.truncated(fit1)
## End(Not run)
Binomial quantile regression estimated by maximizing an asymmetric likelihood function.
amlbinomial(w.aml = 1, parallel = FALSE, digw = 4, link = "logitlink")
w.aml |
Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the “w-regression plane”). The default value of unity results in the ordinary maximum likelihood (MLE) solution. |
parallel |
If |
digw |
Passed into |
link |
See |
The general methodology behind this VGAM family function is given in Efron (1992) and full details can be obtained there. This model is essentially a logistic regression model (see binomialff) but the usual deviance is replaced by an asymmetric squared error loss function, which is multiplied by w.aml for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the weights argument (so that it can contain frequencies). Newton-Raphson estimation is used here.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
If w.aml has more than one value then the value returned by deviance is the sum of all the (weighted) deviances taken over all the w.aml values. See Equation (1.6) of Efron (1992). On fitting, the extra slot has list components "w.aml" and "percentile". The latter is the percent of observations below the “w-regression plane”, which is the fitted values. Also, the individual deviance values corresponding to each element of the argument w.aml are stored in the extra slot.
For amlbinomial objects, methods functions for the generic functions qtplot and cdf have not been written yet. See amlpoisson for comments on the jargon, e.g., expectiles etc.
In this documentation the word quantile can often be interchangeably replaced by expectile (things are informal here).
Thomas W. Yee
Efron, B. (1992). Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. Journal of the American Statistical Association, 87, 98–107.
amlpoisson, amlexponential, amlnormal, extlogF1, alaplace1, denorm.
# Example: binomial data with lots of trials per observation
set.seed(1234)
sizevec <- rep(100, length = (nn <- 200))
mydat <- data.frame(x = sort(runif(nn)))
mydat <- transform(mydat,
                   prob = logitlink(-0 + 2.5*x + x^2, inverse = TRUE))
mydat <- transform(mydat, y = rbinom(nn, size = sizevec, prob = prob))
(fit <- vgam(cbind(y, sizevec - y) ~ s(x, df = 3),
             amlbinomial(w = c(0.01, 0.2, 1, 5, 60)),
             mydat, trace = TRUE))
fit@extra
## Not run: 
par(mfrow = c(1, 2))
# Quantile plot
with(mydat, plot(x, jitter(y), col = "blue", las = 1, main =
     paste(paste(round(fit@extra$percentile, digits = 1),
                 collapse = ", "),
           "percentile-expectile curves")))
with(mydat, matlines(x, 100 * fitted(fit), lwd = 2, col = "blue", lty = 1))

# Compare the fitted expectiles with the quantiles
with(mydat, plot(x, jitter(y), col = "blue", las = 1, main =
     paste(paste(round(fit@extra$percentile, digits = 1),
                 collapse = ", "),
           "percentile curves are red")))
with(mydat, matlines(x, 100 * fitted(fit), lwd = 2, col = "blue", lty = 1))
for (ii in fit@extra$percentile)
  with(mydat, matlines(x, 100 *
       qbinom(p = ii/100, size = sizevec, prob = prob) / sizevec,
       col = "red", lwd = 2, lty = 1))
## End(Not run)
Exponential expectile regression estimated by maximizing an asymmetric likelihood function.
amlexponential(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loglink")
w.aml |
Numeric, a vector of positive constants controlling the expectiles. The larger the value the larger the fitted expectile value (the proportion of points below the “w-regression plane”). The default value of unity results in the ordinary maximum likelihood (MLE) solution. |
parallel |
If |
imethod |
Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. |
digw |
Passed into |
link |
See |
The general methodology behind this VGAM family function is given in Efron (1992) and full details can be obtained there. This model is essentially an exponential regression model (see exponential) but the usual deviance is replaced by an asymmetric squared error loss function, which is multiplied by w.aml for positive residuals. The solution is the set of regression coefficients that minimize the sum of these deviance-type values over the data set, weighted by the weights argument (so that it can contain frequencies). Newton-Raphson estimation is used here.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
Note that the link arguments of exponential and amlexponential are currently different: one is for the rate parameter and the other for the mean (expectile) parameter.
If w.aml has more than one value then the value returned by deviance is the sum of all the (weighted) deviances taken over all the w.aml values. See Equation (1.6) of Efron (1992). On fitting, the extra slot has list components "w.aml" and "percentile". The latter is the percent of observations below the “w-regression plane”, which is the fitted values. Also, the individual deviance values corresponding to each element of the argument w.aml are stored in the extra slot.
For amlexponential objects, methods functions for the generic functions qtplot and cdf have not been written yet. See amlpoisson for comments on the jargon, e.g., expectiles etc.
Thomas W. Yee
Efron, B. (1992). Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. Journal of the American Statistical Association, 87, 98–107.
exponential, amlbinomial, amlpoisson, amlnormal, extlogF1, alaplace1, lms.bcg, deexp.
nn <- 2000
mydat <- data.frame(x = seq(0, 1, length = nn))
mydat <- transform(mydat,
                   mu = loglink(-0 + 1.5*x + 0.2*x^2, inverse = TRUE))
mydat <- transform(mydat, mu = loglink(0 - sin(8*x), inverse = TRUE))
mydat <- transform(mydat, y = rexp(nn, rate = 1/mu))
(fit <- vgam(y ~ s(x, df = 5),
             amlexponential(w = c(0.001, 0.1, 0.5, 5, 60)),
             mydat, trace = TRUE))
fit@extra
## Not run: 
# These plots are against the sqrt scale (to increase clarity)
par(mfrow = c(1, 2))
# Quantile plot
with(mydat, plot(x, sqrt(y), col = "blue", las = 1, main =
     paste(paste(round(fit@extra$percentile, digits = 1),
                 collapse = ", "),
           "percentile-expectile curves")))
with(mydat, matlines(x, sqrt(fitted(fit)), lwd = 2, col = "blue", lty = 1))

# Compare the fitted expectiles with the quantiles
with(mydat, plot(x, sqrt(y), col = "blue", las = 1, main =
     paste(paste(round(fit@extra$percentile, digits = 1),
                 collapse = ", "),
           "percentile curves are orange")))
with(mydat, matlines(x, sqrt(fitted(fit)), lwd = 2, col = "blue", lty = 1))
for (ii in fit@extra$percentile)
  with(mydat, matlines(x, sqrt(qexp(p = ii/100, rate = 1/mu)),
                       col = "orange"))
## End(Not run)
Asymmetric least squares, a special case of maximizing an asymmetric likelihood function of a normal distribution. This allows for expectile/quantile regression using asymmetric least squares error loss.
amlnormal(w.aml = 1, parallel = FALSE, lexpectile = "identitylink", iexpectile = NULL, imethod = 1, digw = 4)
w.aml |
Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the “w-regression plane”). The default value of unity results in the ordinary least squares (OLS) solution. |
parallel |
If |
lexpectile, iexpectile |
See CommonVGAMffArguments for information. |
imethod |
Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. |
digw |
Passed into |
This is an implementation of Efron (1991) and full details can be obtained there. Equation numbers below refer to that article. The model is essentially a linear model (see lm); however, the asymmetric squared error loss function for a residual r is r^2 if r <= 0 and w.aml * r^2 if r > 0. The solution is the set of regression coefficients that minimize the sum of these over the data set, weighted by the weights argument (so that it can contain frequencies). Newton-Raphson estimation is used here.
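A sketch of this loss function, with r denoting a residual and w playing the role of w.aml:

aml.loss <- function(r, w = 1) ifelse(r <= 0, r^2, w * r^2)  # asymmetric squared error
aml.loss(c(-2, -1, 0, 1, 2), w = 3)  # positive residuals weighted by w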
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
and vgam
.
On fitting, the extra slot has list components "w.aml" and "percentile". The latter is the percent of observations below the “w-regression plane”, which is the fitted values. One difficulty is finding the w.aml value giving a specified percentile. One solution is to fit the model within a root finding function such as uniroot; see the example below.
For amlnormal objects, methods functions for the generic functions qtplot and cdf have not been written yet. See the note in amlpoisson on the jargon, including expectiles and regression quantiles.
The deviance slot computes the total asymmetric squared error loss (2.5). If w.aml has more than one value then the value returned by the slot is the sum taken over all the w.aml values. This VGAM family function is named amlnormal() for consistency with the other function names amlpoisson(), amlbinomial(), etc.
In this documentation the word quantile can often be interchangeably replaced by expectile (things are informal here).
Thomas W. Yee
Efron, B. (1991). Regression percentiles using asymmetric squared error loss. Statistica Sinica, 1, 93–125.
amlpoisson, amlbinomial, amlexponential, bmi.nz, extlogF1, alaplace1, denorm, lms.bcn and similar variants are alternative methods for quantile regression.
## Not run: 
# Example 1
ooo <- with(bmi.nz, order(age))
bmi.nz <- bmi.nz[ooo, ]  # Sort by age
(fit <- vglm(BMI ~ sm.bs(age), amlnormal(w.aml = 0.1), bmi.nz))
fit@extra  # Gives the w value and the percentile
coef(fit, matrix = TRUE)

# Quantile plot
with(bmi.nz, plot(age, BMI, col = "blue", main =
     paste(round(fit@extra$percentile, digits = 1),
           "expectile-percentile curve")))
with(bmi.nz, lines(age, c(fitted(fit)), col = "black"))

# Example 2
# Find the w values that give the 25, 50 and 75 percentiles
find.w <- function(w, percentile = 50) {
  fit2 <- vglm(BMI ~ sm.bs(age), amlnormal(w = w), data = bmi.nz)
  fit2@extra$percentile - percentile
}
# Quantile plot
with(bmi.nz, plot(age, BMI, col = "blue", las = 1, main =
     "25, 50 and 75 expectile-percentile curves"))
for (myp in c(25, 50, 75)) {
  # Note: uniroot() can only find one root at a time
  bestw <- uniroot(f = find.w, interval = c(1/10^4, 10^4),
                   percentile = myp)
  fit2 <- vglm(BMI ~ sm.bs(age), amlnormal(w = bestw$root), bmi.nz)
  with(bmi.nz, lines(age, c(fitted(fit2)), col = "orange"))
}

# Example 3; this is Example 1 but with smoothing splines and
# a vector w and a parallelism assumption.
ooo <- with(bmi.nz, order(age))
bmi.nz <- bmi.nz[ooo, ]  # Sort by age
fit3 <- vgam(BMI ~ s(age, df = 4), data = bmi.nz, trace = TRUE,
             amlnormal(w = c(0.1, 1, 10), parallel = TRUE))
fit3@extra  # The w values, percentiles and weighted deviances
# The linear components of the fit; not for human consumption:
coef(fit3, matrix = TRUE)

# Quantile plot
with(bmi.nz, plot(age, BMI, col = "blue", main =
     paste(paste(round(fit3@extra$percentile, digits = 1),
                 collapse = ", "),
           "expectile-percentile curves")))
with(bmi.nz, matlines(age, fitted(fit3), col = 1:fit3@extra$M, lwd = 2))
with(bmi.nz, lines(age, c(fitted(fit)), col = "black"))  # For comparison
## End(Not run)
Poisson quantile regression estimated by maximizing an asymmetric likelihood function.
amlpoisson(w.aml = 1, parallel = FALSE, imethod = 1, digw = 4, link = "loglink")
w.aml |
Numeric, a vector of positive constants controlling the percentiles. The larger the value the larger the fitted percentile value (the proportion of points below the “w-regression plane”). The default value of unity results in the ordinary maximum likelihood (MLE) solution. |
parallel |
If |
imethod |
Integer, either 1 or 2 or 3. Initialization method. Choose another value if convergence fails. |
digw |
Passed into |
link |
See |
This method was proposed by Efron (1992) and full details can
be obtained there.
The model is essentially a Poisson regression model
(see poissonff
) but the usual deviance is replaced by an
asymmetric squared error loss function; it is multiplied by
w.aml for positive residuals.
The solution is the set of regression coefficients that minimize the
sum of these deviance-type values over the data set, weighted by
the
weights
argument (so that it can contain frequencies).
Newton-Raphson estimation is used here.
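To make the asymmetry concrete, here is a minimal sketch (not VGAM internals; asym.pois.dev is a made-up name) of the criterion described above, in which each Poisson unit deviance is multiplied by w.aml whenever the residual is positive:

# Sketch only: the asymmetric deviance-type criterion, for a
# candidate mean vector 'mu' and asymmetry weight 'w.aml'.
asym.pois.dev <- function(y, mu, w.aml = 1) {
  dev <- 2 * (ifelse(y > 0, y * log(y / mu), 0) - (y - mu))  # Unit deviances
  sum(ifelse(y > mu, w.aml, 1) * dev)  # Multiply by w.aml if residual > 0
}
asym.pois.dev(y = c(0, 2, 5), mu = c(1, 2, 3), w.aml = 3)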
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
If w.aml
has more than one value then the value returned by
deviance
is the sum of all the (weighted) deviances taken over
all the w.aml
values.
See Equation (1.6) of Efron (1992).
On fitting, the extra
slot has list components "w.aml"
and "percentile"
. The latter is the percent of observations
below the “w-regression plane”, i.e., below the fitted values.
Also, the individual deviance values corresponding to each element
of the argument w.aml
are stored in the extra
slot.
For amlpoisson
objects, methods functions for the generic
functions qtplot
and cdf
have not been written yet.
About the jargon, Newey and Powell (1987) used the name expectiles for regression surfaces obtained by asymmetric least squares. This was deliberate so as to distinguish them from the original regression quantiles of Koenker and Bassett (1978). Efron (1991) and Efron (1992) use the general name regression percentile to apply to all forms of asymmetric fitting. Although the asymmetric maximum likelihood method very nearly gives regression percentiles in the strictest sense for the normal and Poisson cases, the phrase quantile regression is used loosely in this VGAM documentation.
In this documentation the word quantile can often be interchangeably replaced by expectile (things are informal here).
Thomas W. Yee
Efron, B. (1991). Regression percentiles using asymmetric squared error loss. Statistica Sinica, 1, 93–125.
Efron, B. (1992). Poisson overdispersion estimates based on the method of asymmetric maximum likelihood. Journal of the American Statistical Association, 87, 98–107.
Koenker, R. and Bassett, G. (1978). Regression quantiles. Econometrica, 46, 33–50.
Newey, W. K. and Powell, J. L. (1987). Asymmetric least squares estimation and testing. Econometrica, 55, 819–847.
amlnormal
,
amlbinomial
,
extlogF1
,
alaplace1
.
set.seed(1234)
mydat <- data.frame(x = sort(runif(nn <- 200)))
mydat <- transform(mydat, y = rpois(nn, exp(0 - sin(8*x))))
(fit <- vgam(y ~ s(x), fam = amlpoisson(w.aml = c(0.02, 0.2, 1, 5, 50)),
             mydat, trace = TRUE))
fit@extra

## Not run:
# Quantile plot
with(mydat, plot(x, jitter(y), col = "blue", las = 1, main = paste(paste(
     round(fit@extra$percentile, digits = 1), collapse = ", "),
     "percentile-expectile curves")))
with(mydat, matlines(x, fitted(fit), lwd = 2))
## End(Not run)
Compute an analysis of deviance table for one or more vector generalized linear model fits.
## S3 method for class 'vglm'
anova(object, ..., type = c("II", "I", "III", 2, 1, 3),
      test = c("LRT", "none"), trydev = TRUE, silent = TRUE)
object , ...
|
objects of class |
type |
character or numeric;
any one of the
(effectively three) choices given.
Note that |
test |
a character string,
(partially) matching one of
|
trydev |
logical; if |
silent |
logical; if |
anova.vglm
is intended to be similar to
anova.glm
so specifying a single object and type = 1
gives a
sequential analysis of deviance table for that fit.
By analysis of deviance, it is meant loosely
that if the deviance of the model is not defined or implemented,
then twice the difference between the log-likelihoods of two
nested models remains asymptotically chi-squared distributed
with degrees of freedom equal to the difference in the number
of parameters of the two models.
Of course, the usual regularity conditions are assumed to hold.
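As a concrete illustration of this definition, the LRT between two nested vglm() fits can be assembled by hand (a sketch only; anova.vglm and lrtest do this, and more, for you):

# Sketch: a hand-rolled likelihood ratio test of two nested fits.
pneumo <- transform(pneumo, let = log(exposure.time))
fit.small <- vglm(cbind(normal, mild, severe) ~ 1,   propodds, pneumo)
fit.big   <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)
lrt.stat <- 2 * (logLik(fit.big) - logLik(fit.small))
dfdiff <- df.residual(fit.small) - df.residual(fit.big)
pchisq(lrt.stat, df = dfdiff, lower.tail = FALSE)  # Cf. anova(..., type = 1)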
For Type I,
the analysis of deviance table gives
the reductions in the residual deviance
as each term of the formula is added in turn;
these form the rows of the table,
together with the residual deviances themselves.
Type I or sequential tests
(as in anova.glm
)
are computationally the easiest of the three methods.
For this, the order of the terms is important, and
each term is added sequentially from first to last.
The Anova()
function in car allows
Type II and Type III (SAS jargon) hypothesis
tests, although the definitions used are not precisely
those of SAS.
As car notes,
Type I tests rarely test interesting hypotheses in unbalanced
designs. Type III tests enter each term last, keeping all
the other terms in the model.
Type II tests, according to SAS, add the term after all other terms have been added to the model except terms that contain the effect being tested; an effect is contained in another effect if it can be derived by deleting variables from the latter effect. Type II tests are currently the default.
As in anova.glm
, but not as
Anova.glm()
in car,
if more than one object is specified, then
the table has a row for the
residual degrees of freedom and deviance for each model.
For all but the first model, the change in degrees of freedom
and deviance is also given. (This only makes statistical sense
if the models are nested.) It is conventional to list the
models from smallest to largest, but this is up to the user.
It is necessary to have type = 1
when more than one object is specified.
See anova.glm
for more details
and warnings.
The VGAM package now implements full likelihood models
only, therefore no dispersion parameters are estimated.
An object of class "anova"
inheriting from
class "data.frame"
.
See anova.glm
.
Several VGAM family functions implement distributions
which do not satisfy the usual regularity conditions needed for
the LRT to work. No checking or warning is given for these.
As car says, be careful of Type III tests because they violate marginality. Type II tests (the default) do not have this problem.
It is possible for this function to stop
when type = 2
or 3
, e.g.,
anova(vglm(cans ~ myfactor, poissonff, data = boxcar))
where myfactor
is a factor.
The code was adapted
directly from anova.glm
and Anova.glm()
in car
by T. W. Yee.
Hence the Type II and Type III tests do not
correspond precisely with the SAS definition.
anova.glm
,
stat.anova
,
stats:::print.anova
,
Anova.glm()
in car if car is installed,
vglm
,
lrtest
,
add1.vglm
,
drop1.vglm
,
lrt.stat.vlm
,
score.stat.vlm
,
wald.stat.vlm
,
backPain2
,
update
.
# Example 1: a proportional odds model fitted to pneumo.
set.seed(1)
pneumo <- transform(pneumo, let = log(exposure.time), x3 = runif(8))
fit1 <- vglm(cbind(normal, mild, severe) ~ let     , propodds, pneumo)
fit2 <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, pneumo)
fit3 <- vglm(cbind(normal, mild, severe) ~ let + x3, cumulative, pneumo)
anova(fit1, fit2, fit3, type = 1)  # Remember to specify 'type'!!
anova(fit2)
anova(fit2, type = "I")
anova(fit2, type = "III")

# Example 2: a proportional odds model fitted to backPain2.
data("backPain2", package = "VGAM")
summary(backPain2)
fitlogit <- vglm(pain ~ x2 * x3 * x4, propodds, data = backPain2)
coef(fitlogit)
anova(fitlogit)
anova(fitlogit, type = "I")
anova(fitlogit, type = "III")
Maximum likelihood estimation of the three-parameter AR-1 model
AR1(ldrift = "identitylink", lsd = "loglink", lvar = "loglink",
    lrho = "rhobitlink", idrift = NULL, isd = NULL, ivar = NULL,
    irho = NULL, imethod = 1, ishrinkage = 0.95,
    type.likelihood = c("exact", "conditional"),
    type.EIM = c("exact", "approximate"), var.arg = FALSE,
    nodrift = FALSE, print.EIM = FALSE,
    zero = c(if (var.arg) "var" else "sd", "rho"))
ldrift , lsd , lvar , lrho
|
Link functions applied to the scaled mean, standard deviation
or variance, and correlation parameters.
The parameter |
idrift , isd , ivar , irho
|
Optional initial values for the parameters.
If failure to converge occurs then try different values
and monitor convergence by using |
ishrinkage , imethod , zero
|
See |
var.arg |
Same meaning as |
nodrift |
Logical, for determining whether to estimate the drift parameter.
The default is to estimate it.
If |
type.EIM |
What type of expected information matrix (EIM) is used in
Fisher scoring. By default, this family function calls
AR1EIM. If |
print.EIM |
Logical. If |
type.likelihood |
What type of likelihood function is maximized.
The first choice (default) is the sum of the marginal likelihood
and the conditional likelihood.
Choosing the conditional likelihood means that the first observation is
effectively ignored (this is handled internally by setting
the value of the first prior weight to be some small
positive number, e.g., |
The AR-1 model implemented here has

Y_1 ~ N(mu, sigma^2 / (1 - rho^2)),

and

Y_t = mu^* + rho * Y_{t-1} + e_t,

where the e_t are i.i.d. Normal(0, sd = sigma)
random variates.

Here are a few notes:
(1). A test for weak stationarity might be to verify whether
1/rho lies outside the unit circle.
(2). The mean of all the Y_t
is mu = mu^* / (1 - rho), and
these are returned as the fitted values.
(3). The correlation of all the Y_t
with Y_{t-1}
is rho.
(4). The default link function ensures that
-1 < rho < 1.
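A small simulation sketch (base R only) illustrates notes (2) and (3): the sample mean should be near mu^* / (1 - rho) and the lag-1 autocorrelation near rho. The parameter values below are hypothetical.

set.seed(1)
drift <- 1; rho <- 0.5; sdev <- 2; nn <- 5000
y <- numeric(nn); y[1] <- drift / (1 - rho)
for (tt in 2:nn)
  y[tt] <- drift + rho * y[tt - 1] + rnorm(1, sd = sdev)
c(mean(y), drift / (1 - rho))  # Note (2): both approximately equal
c(cor(y[-1], y[-nn]), rho)     # Note (3): both approximately equal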
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
Monitoring convergence is urged, i.e., set trace = TRUE
.
Moreover, if the exact EIMs are used, set print.EIM = TRUE
to compare the computed exact to the approximate EIM.
Under the VGLM/VGAM approach, parameters can be modelled in terms
of covariates. Particularly, if the standard deviation of
the white noise is modelled in this way, then
type.EIM = "exact"
may certainly lead to unstable
results. The reason is that white noise is a stationary
process, so its variance must remain constant.
Using covariates to model
this parameter therefore contradicts the assumption of
stationary random components underlying the exact EIMs proposed
by Porat and Friedlander (1987).
To prevent convergence issues in such cases, this family function
internally verifies whether the variance of the white noise remains
as a constant at each Fisher scoring iteration.
If this assumption is violated and type.EIM = "exact"
is set,
then AR1
automatically shifts to
type.EIM = "approximate"
.
Also, a warning is accordingly displayed.
Multiple responses are handled. The mean is returned as the fitted values.
Victor Miranda (exact method) and Thomas W. Yee (approximate method).
Porat, B. and Friedlander, B. (1987). The Exact Cramer-Rao Bound for Gaussian Autoregressive Processes. IEEE Transactions on Aerospace and Electronic Systems, AES-23(4), 537–542.
AR1EIM
,
vglm.control
,
dAR1
,
arima.sim
.
## Not run:
### Example 1: using arima.sim() to generate a 0-mean stationary time series.
nn <- 500
tsdata <- data.frame(x2 = runif(nn))
ar.coef.1 <- rhobitlink(-1.55, inverse = TRUE)  # Approx -0.65
ar.coef.2 <- rhobitlink( 1.0,  inverse = TRUE)  # Approx  0.50
set.seed(1)
tsdata <- transform(tsdata, index = 1:nn,
            TS1 = arima.sim(nn, model = list(ar = ar.coef.1),
                            sd = exp(1.5)),
            TS2 = arima.sim(nn, model = list(ar = ar.coef.2),
                            sd = exp(1.0 + 1.5 * x2)))

### An autoregressive intercept--only model.   ###
### Using the exact EIM, and "nodrift = TRUE"  ###
fit1a <- vglm(TS1 ~ 1, data = tsdata, trace = TRUE,
              AR1(var.arg = FALSE, nodrift = TRUE,
                  type.EIM = "exact", print.EIM = FALSE),
              crit = "coefficients")
Coef(fit1a)
summary(fit1a)

### Two responses. Here, the white noise standard deviation of TS2 ###
### is modelled in terms of 'x2'. Also, 'type.EIM = exact'.        ###
fit1b <- vglm(cbind(TS1, TS2) ~ x2,
              AR1(zero = NULL, nodrift = TRUE, var.arg = FALSE,
                  type.EIM = "exact"),
              constraints = list("(Intercept)" = diag(4),
                                 "x2" = rbind(0, 0, 1, 0)),
              data = tsdata, trace = TRUE, crit = "coefficients")
coef(fit1b, matrix = TRUE)
summary(fit1b)

### Example 2: another stationary time series
nn <- 500
my.rho <- rhobitlink(1.0, inverse = TRUE)
my.mu  <- 1.0
my.sd  <- exp(1)
tsdata <- data.frame(index = 1:nn, TS3 = runif(nn))
set.seed(2)
for (ii in 2:nn)
  tsdata$TS3[ii] <- my.mu/(1 - my.rho) + my.rho * tsdata$TS3[ii-1] +
                    rnorm(1, sd = my.sd)
tsdata <- tsdata[-(1:ceiling(nn/5)), ]  # Remove the burn-in data:

### Fitting an AR(1). The exact EIMs are used.
fit2a <- vglm(TS3 ~ 1,
              AR1(type.likelihood = "exact",  # "conditional",
                  type.EIM = "exact"),
              data = tsdata, trace = TRUE, crit = "coefficients")
Coef(fit2a)
summary(fit2a)  # SEs are useful to know

Coef(fit2a)["rho"]    # Estimate of rho, for intercept-only models
my.rho                # The 'truth' (rho)
Coef(fit2a)["drift"]  # Estimate of drift, for intercept-only models
my.mu /(1 - my.rho)   # The 'truth' (drift)
## End(Not run)
Computation of the exact Expected Information Matrix of
the Autoregressive process of order 1, AR(1),
with Gaussian white noise and stationary
random components.
AR1EIM(x = NULL, var.arg = NULL, p.drift = NULL,
       WNsd = NULL, ARcoeff1 = NULL, eps.porat = 1e-2)
x |
A vector of quantiles. The Gaussian time series for which the EIMs are computed. If multiple time series are being analyzed, then |
var.arg |
Logical. Same as with |
p.drift |
A numeric vector with the scaled mean(s) (commonly referred as drift) of the AR process(es) in turn. Its length matches the number of responses. |
WNsd , ARcoeff1
|
Matrices.
The standard deviation of the white noise, and the
correlation (coefficient) of the AR(1) process. That is, the dimension for each matrix is |
eps.porat |
A very small positive number to test whether the standard deviation
( See below for further details. |
This function implements the algorithm of Porat and Friedlander (1986) to recursively compute the exact expected information matrix (EIM) of Gaussian time series with stationary random components.
By default, when the VGLM/VGAM family function
AR1
is used to fit an AR(1) model
via
vglm
, Fisher scoring is executed using
the approximate EIM for the AR process. However, this model
can also be fitted using the exact EIMs computed by
AR1EIM
.
Given y_1, ..., y_N, N consecutive data points
with probability density f(y; theta),
the Porat and Friedlander algorithm
recursively calculates the EIMs
J_n(theta),
for all 1 <= n <= N. This is done based on the
Levinson-Durbin algorithm for computing the orthogonal polynomials of
a Toeplitz matrix.

In particular, for the AR(1) model, the vector of parameters
to be estimated under the VGAM/VGLM approach is

theta = (sigma^2, mu^*, rho)^T,

where sigma^2 is the variance of the white noise
and mu^* is the drift parameter
(see AR1 for further details on this).

Consequently, for each observation n = 1, ..., N, the EIM,
J_n(theta), has dimension 3 x 3,
where the diagonal elements are the expected negative second
derivatives of log f(y; theta) with respect to each single parameter,
e.g., E[-d^2 log f(y; theta) / d(sigma^2)^2]
and E[-d^2 log f(y; theta) / d rho^2].
As for the off-diagonal elements, one has the usual entries, i.e.,
E[-d^2 log f(y; theta) / d sigma^2 d mu^*],
etc.
If var.arg = FALSE, then sigma instead of sigma^2
is estimated; the entries involving sigma^2
are correspondingly replaced.
Once these expected values are internally computed, they are returned
in an array of dimension N x 1 x 6:
the three diagonal elements of each EIM stacked first,
followed by the three off-diagonal elements of its upper band.
AR1EIM handles multiple time series, say NOS of them.
If this happens, then it accordingly returns an array of
dimension N x NOS x 6. Here,
the kth slice, for k = 1, ..., NOS, is a matrix
of dimension N x 6, which
stores the EIMs for the kth response, as
above, i.e.,
the band format required by
AR1.
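For instance, one row of the returned band format can be unpacked into a symmetric 3 x 3 EIM as follows (a sketch; unpack.eim is a made-up helper, and the ordering of the three off-diagonal columns is an assumption, so check it against actual AR1EIM output):

# Sketch: rebuild a symmetric 3 x 3 EIM from one row of the
# (N x 6) band-format matrix; the off-diagonal ordering is assumed.
unpack.eim <- function(row6) {
  J <- diag(row6[1:3])           # The three diagonal elements
  J[1, 2] <- J[2, 1] <- row6[4]  # Assumed to be J[1, 2]
  J[2, 3] <- J[3, 2] <- row6[5]  # Assumed to be J[2, 3]
  J[1, 3] <- J[3, 1] <- row6[6]  # Assumed to be J[1, 3]
  J
}
unpack.eim(c(2, 3, 4, 0.1, 0.2, 0.3))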
An array of dimension N x NOS x 6,
as above.
This array stores the EIMs calculated from the joint density as
a function of theta.
Nevertheless, note that, under the VGAM/VGLM approach, the EIMs
must be correspondingly calculated in terms of the linear
predictors, eta.
For large enough n, the EIMs,
J_n(theta),
become approximately linear in n. That is, for some n0,

J_n(theta) ~ J_{n0}(theta) + (n - n0) Jbar(theta),   (*)

where Jbar(theta) is
a constant matrix.
This relationship is used internally if a proper value
of n0 is determined. Different ways can be adopted to
find n0. In
AR1EIM, this is done by checking
the difference between the internally estimated variances and the
ones entered at WNsd.
If this difference is less than
eps.porat
at some iteration, say at iteration n0,
then
AR1EIM
takes Jbar(theta)
as the last computed increment of
J_n(theta), and extrapolates
J_n(theta), for all n >= n0,
using (*).
Else, the algorithm will complete the iterations for
1 <= n <= N.

Finally, note that the rate of convergence reasonably decreases if
the asymptotic relationship (*) is used to compute
J_n(theta), n >= n0. Normally, the number
of operations involved in this algorithm is proportional to
N^2.

See Porat and Friedlander (1986) for full details on the asymptotic behaviour of the algorithm.
Arguments WNsd
and ARcoeff1
are matrices of dimension
N x NOS; else, these arguments are
recycled accordingly.
For simplicity, one can assume that the time series analyzed has
a 0-mean. Consequently, where the family function
AR1
calls AR1EIM
to compute
the EIMs, the argument p.drift
is internally set
to zero-vector, whereas x
is centered by
subtracting its mean value.
V. Miranda and T. W. Yee.
Porat, B. and Friedlander, B. (1986). Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-34(1), 118–130.
AR1
.
set.seed(1)
nn <- 500
ARcoeff1 <- c(0.3, 0.25)         # Will be recycled.
WNsd     <- c(exp(1), exp(1.5))  # Will be recycled.
p.drift  <- c(0, 0)              # Zero-mean Gaussian time series.

### Generate two (zero-mean) AR(1) processes ###
ts1 <- p.drift[1]/(1 - ARcoeff1[1]) +
       arima.sim(model = list(ar = ARcoeff1[1]), n = nn, sd = WNsd[1])
ts2 <- p.drift[2]/(1 - ARcoeff1[2]) +
       arima.sim(model = list(ar = ARcoeff1[2]), n = nn, sd = WNsd[2])
ARdata <- matrix(cbind(ts1, ts2), ncol = 2)

### Compute the exact EIMs: TWO responses. ###
ExactEIM <- AR1EIM(x = ARdata, var.arg = FALSE, p.drift = p.drift,
                   WNsd = WNsd, ARcoeff1 = ARcoeff1)

### For response 1:
head(ExactEIM[, 1, ])  # NOTICE THAT THIS IS A (nn x 6) MATRIX!
### For response 2:
head(ExactEIM[, 2, ])  # NOTICE THAT THIS IS A (nn x 6) MATRIX!
Computes the arcsine link, including its inverse and the first few derivatives.
asinlink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
         short = TRUE, tag = FALSE, c10 = c(4, -pi))
theta |
Numeric or character. See below for further details. |
bvalue |
See |
inverse , deriv , short , tag
|
Details at |
c10 |
Similar to |
Function alogitlink
gives some motivation for this link.
However, the problem with this link
is that it is bounded by default
between (-pi, pi)
so that it can be unsuitable for regression.
This link is a scaled and centred
CDF of the arcsine distribution.
The centring is chosen so that
asinlink(0.5)
is 0,
and the scaling is chosen so that
asinlink(0.5, deriv = 1)
and
logitlink(0.5, deriv = 1)
are equal (the value 4 actually),
hence this link will operate similarly to the
logitlink
when close to 0.5.
Similar to logitlink
but using different formulas.
It is possible that the scaling might change in the future.
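A quick numerical check of the centring and scaling claims above:

asinlink(0.5)  # Should be 0 (the centring)
c(asinlink(0.5, deriv = 1), logitlink(0.5, deriv = 1))  # Both 4 (the scaling)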
Thomas W. Yee
logitlink
,
alogitlink
,
Links
,
probitlink
,
clogloglink
,
cauchitlink
,
binomialff
,
sloglink
,
hdeff
.
p <- seq(0.01, 0.99, length = 10)
asinlink(p)
max(abs(asinlink(asinlink(p), inv = TRUE) - p))  # 0?

## Not run:
par(mfrow = c(2, 2), lwd = (mylwd <- 2))
y <- seq(-4, 4, length = 100)
p <- seq(0.01, 0.99, by = 0.01)

for (d in 0:1) {
  matplot(p, cbind(logitlink(p, deriv = d), probitlink(p, deriv = d)),
          type = "n", col = "blue", ylab = "transformation",
          log = ifelse(d == 1, "y", ""), las = 1,
          main = if (d == 0) "Some probability link functions" else
                 "First derivative")
  lines(p,   logitlink(p, deriv = d), col = "green")
  lines(p,  probitlink(p, deriv = d), col = "blue")
  lines(p, clogloglink(p, deriv = d), col = "tan")
  lines(p,    asinlink(p, deriv = d), col = "red3")
  if (d == 0) {
    abline(v = 0.5, h = 0, lty = "dashed")
    legend(0, 4.5, c("logitlink", "probitlink", "clogloglink",
           "asinlink"), lwd = mylwd,
           col = c("green", "blue", "tan", "red3"))
  } else
    abline(v = 0.5, lwd = 0.5, col = "gray")
}

for (d in 0) {
  matplot(y, cbind( logitlink(y, deriv = d, inverse = TRUE),
                   probitlink(y, deriv = d, inverse = TRUE)),
          type = "n", col = "blue", xlab = "transformation", ylab = "p",
          main = if (d == 0) "Some inverse probability link functions" else
                 "First derivative", las = 1)
  lines(y,   logitlink(y, deriv = d, inverse = TRUE), col = "green")
  lines(y,  probitlink(y, deriv = d, inverse = TRUE), col = "blue")
  lines(y, clogloglink(y, deriv = d, inverse = TRUE), col = "tan")
  lines(y,    asinlink(y, deriv = d, inverse = TRUE), col = "red3")
  if (d == 0) {
    abline(h = 0.5, v = 0, lwd = 0.5, col = "gray")
    legend(-4, 1, c("logitlink", "probitlink", "clogloglink",
           "asinlink"), lwd = mylwd,
           col = c("green", "blue", "tan", "red3"))
  }
}
par(lwd = 1)
## End(Not run)
Undergraduate student enrolments at the University of Auckland in 1990.
data(auuc)
A data frame with 4 observations on the following 5 variables.
a numeric vector of counts.
a numeric vector of counts.
a numeric vector of counts.
a numeric vector of counts.
a numeric vector of counts.
Each student is cross-classified by their colleges (Science and Engineering have been combined) and the socio-economic status (SES) of their fathers (1 = highest, down to 4 = lowest).
Dr Tony Morrison.
Wild, C. J. and Seber, G. A. F. (2000). Chance Encounters: A First Course in Data Analysis and Inference, New York: Wiley.
auuc
## Not run:
round(fitted(grc(auuc)))
round(fitted(grc(auuc, Rank = 2)))
## End(Not run)
Returns behavioural effects indicator variables from a capture history matrix.
aux.posbernoulli.t(y, check.y = FALSE, rename = TRUE, name = "bei")
y |
Capture history matrix. Rows are animals, columns are sampling occasions, and values should be 0s and 1s only. |
check.y |
Logical, if |
rename , name
|
If |
This function can help fit certain capture–recapture models
(commonly known in the literature as M_tbh or M_bh;
no t subscript means the model is intercept-only
with respect to time).
See
posbernoulli.t
for details.
A list with the following components.

cap.hist1: A matrix the same dimension as y.
In any particular row there are 0s up to
the first capture, and 1s thereafter.

cap1: A vector specifying on which time occasion the animal was first captured.

y0i: Number of noncaptures before the first capture.

yr0i: Number of noncaptures after the first capture.

yr1i: Number of recaptures after the first capture.
# Fit a M_tbh model to the deermice data:
(pdata <- aux.posbernoulli.t(with(deermice,
                                  cbind(y1, y2, y3, y4, y5, y6))))
deermice <- data.frame(deermice,
                       bei = 0,          # Add this
                       pdata$cap.hist1)  # Incorporate these
head(deermice)  # Augmented with behavioural effect indicator variables
tail(deermice)
Data from a study of patients suffering from back pain. Prognostic variables were recorded at presentation and progress was categorised three weeks after treatment.
data(backPain)
A data frame with 101 observations on the following 4 variables.

x1: length of previous attack.

x2: pain change.

x3: lordosis.

pain: an ordered factor describing the progress of each
patient, with levels worse
< same
< slight.improvement
< moderate.improvement
< marked.improvement
< complete.relief.
http://ideas.repec.org/c/boc/bocode/s419001.html
The data set and this help file were copied from gnm so that a vignette in VGAM could be run; the analysis is described in Yee (2010).
The data frame backPain2
is a modification of
backPain
where the variables have been renamed
(x1
becomes x2
,
x2
becomes x3
,
x3
becomes x4
)
and
converted into factors.
Anderson, J. A. (1984). Regression and Ordered Categorical Variables. J. R. Statist. Soc. B, 46(1), 1-30.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10.
summary(backPain)
summary(backPain2)
Purchasing of bacon and eggs.
data(beggs)
Data frame of a two-way table.
The b
refers to bacon; the number of times bacon was purchased was 0, 1, 2, 3, or 4.
The e
refers to eggs; the number of times eggs were purchased was 0, 1, 2, 3, or 4.
The data is from Information Resources, Inc., a consumer panel based in a large US city [see Bell and Lattin (1998) for further details]. Starting in June 1991, the purchases in the bacon and fresh eggs product categories for a sample of 548 households over four consecutive store trips were tracked. Only those grocery shopping trips with a total basket value of at least five dollars were considered. For each household, the total number of bacon purchases in their four eligible shopping trips and the total number of egg purchases (usually a package of eggs) for the same trips were counted.
Bell, D. R. and Lattin, J. M. (1998). Shopping Behavior and Consumer Preference for Store Price Format: Why ‘Large Basket’ Shoppers Prefer EDLP. Marketing Science, 17, 66–88.
Danaher, P. J. and Hardie, B. G. S. (2005). Bacon with Your Eggs? Applications of a New Bivariate Beta-Binomial Distribution. American Statistician, 59(4), 282–286.
beggs
colSums(beggs)
rowSums(beggs)
Returns the values of the Bell series.
bell(n)
n |
Vector of non-negative integers.
Values greater than 218 return an Inf. |
The Bell numbers emerge from a series expansion of
exp(e^x - 1)
for real x.
The first few values are
B_0 = 1,
B_1 = 1,
B_2 = 2,
B_3 = 5,
B_4 = 15.
The series increases quickly so that overflow occurs when
its argument is more than 218.
This function returns
B_n.
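A quick check of the values and the overflow threshold quoted above:

bell(0:4)          # 1 1 2 5 15, i.e., B_0, ..., B_4
bell(c(218, 219))  # The second value overflows to Inf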
T. W. Yee
Bell, E. T. (1934). Exponential polynomials. Ann. Math., 35, 258–277.
Bell, E. T. (1934). Exponential numbers. Amer. Math. Monthly, 41, 411–419.
## Not run:
plot(0:10, bell(0:10), log = "y", type = "h", col = "blue")
## End(Not run)
Density, distribution function, quantile function, and random generation for Benford's distribution.
dbenf(x, ndigits = 1, log = FALSE)
pbenf(q, ndigits = 1, lower.tail = TRUE, log.p = FALSE)
qbenf(p, ndigits = 1, lower.tail = TRUE, log.p = FALSE)
rbenf(n, ndigits = 1)
x , q
|
Vector of quantiles.
See |
p |
vector of probabilities. |
n |
number of observations. A single positive integer.
Else if |
ndigits |
Number of leading digits, either 1 or 2. If 1 then the support of the distribution is {1,...,9}, else {10,...,99}. |
log , log.p
|
Logical.
If |
lower.tail |
Benford's Law (aka the significant-digit law) is the
empirical observation that in many naturally occurring tables of
numerical data, the leading significant (nonzero) digit
is not uniformly distributed in {1, 2, ..., 9}.
Instead, the leading significant digit (D, say)
obeys the law

P(D = d) = log10(1 + 1/d)

for d = 1, ..., 9.
This means
the probability the first significant digit is 1 is
approximately log10(2) = 0.301, etc.
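The law can be verified directly against dbenf:

dd <- 1:9
cbind(direct = log10(1 + 1/dd), dbenf = dbenf(dd))  # Columns agree
sum(log10(1 + 1/dd))  # The probabilities sum to 1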
Benford's Law was apparently first discovered in 1881 by astronomer/mathematician S. Newcomb. It stemmed from the observation that the pages of a book of logarithms were dirtiest at the beginning and progressively cleaner throughout. In 1938, a General Electric physicist called F. Benford rediscovered the law from this same observation. Over several years he collected data from sources as different as atomic weights, baseball statistics, numerical data from Reader's Digest, and drainage areas of rivers.
Applications of Benford's Law have been as diverse as fraud detection in accounting and the design of computers.
Benford's distribution has been called
“a” logarithmic distribution;
see logff
.
dbenf
gives the density,
pbenf
gives the distribution function,
qbenf
gives the quantile function, and
rbenf
generates random deviates.
T. W. Yee and Kai Huang
Benford, F. (1938). The Law of Anomalous Numbers. Proceedings of the American Philosophical Society, 78, 551–572.
Newcomb, S. (1881). Note on the Frequency of Use of the Different Digits in Natural Numbers. American Journal of Mathematics, 4, 39–40.
dbenf(x <- c(0:10, NA, NaN, -Inf, Inf))
pbenf(x)

## Not run:
xx <- 1:9
barplot(dbenf(xx), col = "lightblue", xlab = "Leading digit",
        ylab = "Probability", names.arg = as.character(xx),
        main = "Benford's distribution", las = 1)

hist(rbenf(1000), border = "blue", prob = TRUE,
     main = "1000 random variates from Benford's distribution",
     xlab = "Leading digit", sub = "Red is the true probability",
     breaks = 0:9 + 0.5, ylim = c(0, 0.35), xlim = c(0, 10.0))
lines(xx, dbenf(xx), col = "red", type = "h")
points(xx, dbenf(xx), col = "red")
## End(Not run)
Density, distribution function, quantile function and
random generation for the Benini distribution with parameter
shape
.
dbenini(x, y0, shape, log = FALSE)
pbenini(q, y0, shape, lower.tail = TRUE, log.p = FALSE)
qbenini(p, y0, shape, lower.tail = TRUE, log.p = FALSE)
rbenini(n, y0, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as |
y0 |
the scale parameter |
shape |
the positive shape parameter |
log |
Logical.
If |
lower.tail , log.p
|
See benini1
, the VGAM family function
for estimating the parameter by maximum likelihood
estimation, for the formula of the probability density function
and other details.
dbenini
gives the density,
pbenini
gives the distribution function,
qbenini
gives the quantile function, and
rbenini
generates random deviates.
T. W. Yee and Kai Huang
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
## Not run:
y0 <- 1; shape <- exp(1)
xx <- seq(0.0, 4, len = 101)
plot(xx, dbenini(xx, y0 = y0, shape = shape), col = "blue",
     main = "Blue is density, orange is the CDF", type = "l",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     ylim = 0:1, las = 1, ylab = "", xlab = "x")
abline(h = 0, col = "blue", lty = 2)
lines(xx, pbenini(xx, y0 = y0, shape = shape), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qbenini(probs, y0 = y0, shape = shape)
lines(Q, dbenini(Q, y0 = y0, shape = shape),
      col = "purple", lty = 3, type = "h")
pbenini(Q, y0 = y0, shape = shape) - probs  # Should be all zero
## End(Not run)
Estimating the 1-parameter Benini distribution by maximum likelihood estimation.
benini1(y0 = stop("argument 'y0' must be specified"),
        lshape = "loglink", ishape = NULL, imethod = 1, zero = NULL,
        parallel = FALSE, type.fitted = c("percentiles", "Qlink"),
        percentiles = 50)
y0 |
Positive scale parameter. |
lshape |
Parameter link function and extra argument of the parameter
|
ishape |
Optional initial value for the shape parameter. The default is to compute the value internally. |
imethod , zero , parallel
|
Details at |
type.fitted , percentiles
|
See |
The Benini distribution has a probability density function that can be written

f(y) = 2 * s * exp(-s * [log(y/y0)]^2) * log(y/y0) / y

for y > y0, where s > 0 is the shape parameter.
The cumulative distribution function for Y
is

F(y) = 1 - exp(-s * [log(y/y0)]^2).

Here, Newton-Raphson and Fisher scoring coincide.
The median of Y is now returned as the fitted values,
by default.
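Since the median is the default fitted value, it can be cross-checked against the quantile function and the closed form implied by the CDF displayed above (a quick sketch, with hypothetical parameter values):

y0 <- 1; shape <- exp(2)
qbenini(0.5, y0 = y0, shape = shape)  # The median
y0 * exp(sqrt(log(2) / shape))        # Closed form from the CDF above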
This VGAM family function can handle multiple
responses, which are inputted as a matrix.
On fitting, the extra
slot has a component called
y0
which contains the value of the y0
argument.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Yet to do: the 2-parameter Benini distribution estimates another
shape parameter too; hence, the code may change in
the future.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
y0 <- 1; nn <- 3000
bdata <- data.frame(y = rbenini(nn, y0 = y0, shape = exp(2)))
fit <- vglm(y ~ 1, benini1(y0 = y0), data = bdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
fit@extra$y0
c(head(fitted(fit), 1), with(bdata, median(y)))  # Should be equal
Density, distribution function, and random generation for the beta-binomial distribution and the inflated beta-binomial distribution.
dbetabinom(x, size, prob, rho = 0, log = FALSE)
pbetabinom(q, size, prob, rho = 0, log.p = FALSE)
rbetabinom(n, size, prob, rho = 0)
dbetabinom.ab(x, size, shape1, shape2, log = FALSE,
              Inf.shape = exp(20), limit.prob = 0.5)
pbetabinom.ab(q, size, shape1, shape2, limit.prob = 0.5,
              log.p = FALSE)
rbetabinom.ab(n, size, shape1, shape2, limit.prob = 0.5,
              .dontuse.prob = NULL)
dzoibetabinom(x, size, prob, rho = 0, pstr0 = 0, pstrsize = 0,
              log = FALSE)
pzoibetabinom(q, size, prob, rho, pstr0 = 0, pstrsize = 0,
              lower.tail = TRUE, log.p = FALSE)
rzoibetabinom(n, size, prob, rho = 0, pstr0 = 0, pstrsize = 0)
dzoibetabinom.ab(x, size, shape1, shape2, pstr0 = 0, pstrsize = 0,
                 log = FALSE)
pzoibetabinom.ab(q, size, shape1, shape2, pstr0 = 0, pstrsize = 0,
                 lower.tail = TRUE, log.p = FALSE)
rzoibetabinom.ab(n, size, shape1, shape2, pstr0 = 0, pstrsize = 0)
x , q
|
vector of quantiles. |
size |
number of trials. |
n |
number of observations.
Same as |
prob |
the probability of success |
rho |
the correlation parameter |
shape1 , shape2
|
the two (positive) shape parameters of the standard
beta distribution. They are called |
log , log.p , lower.tail
|
Same meaning as |
Inf.shape |
Numeric. A large value such that,
if |
limit.prob |
Numerical vector; recycled if necessary.
If either shape parameters are |
.dontuse.prob |
An argument that should be ignored and not used. |
pstr0 |
Probability of a structural zero
(i.e., ignoring the beta-binomial distribution).
The default value of |
pstrsize |
Probability of a structural maximum value |
The beta-binomial distribution is a binomial distribution whose
probability of success is not constant but is generated
from a beta distribution with parameters shape1
and
shape2
. Note that the mean of this beta distribution
is mu = shape1/(shape1+shape2)
, which therefore is the
mean or the probability of success.
See betabinomial
and betabinomialff
,
the VGAM family functions for
estimating the parameters, for the formula of the probability
density function and other details.
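The mean mu = shape1/(shape1+shape2) quoted above can be checked quickly by simulation (a sketch, with hypothetical shape values):

set.seed(123)
s1 <- 2; s2 <- 3; NN <- 10
y <- rbetabinom.ab(1e5, size = NN, shape1 = s1, shape2 = s2)
c(mean(y / NN), s1 / (s1 + s2))  # Both approximately 0.4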
For the inflated beta-binomial distribution, the probability mass function is

P(Y = y) = pstr0 * I[y = 0] + pstrsize * I[y = size] + (1 - pstr0 - pstrsize) * BB(y),

where BB(y) is the probability mass function
of the beta-binomial distribution with the same shape parameters
(pbetabinom.ab),
pstr0
is the inflated probability at 0
and pstrsize
is the inflated probability at size.
The default values of pstr0
and pstrsize
mean that these functions behave like the ordinary
Betabinom
when only the essential arguments
are inputted.
dbetabinom
and dbetabinom.ab
give the density,
pbetabinom
and pbetabinom.ab
give the
distribution function, and
rbetabinom
and rbetabinom.ab
generate random
deviates.
dzoibetabinom
and dzoibetabinom.ab
give the
inflated density,
pzoibetabinom
and pzoibetabinom.ab
give the
inflated distribution function, and
rzoibetabinom
and rzoibetabinom.ab
generate
random inflated deviates.
Setting rho = 1
is not recommended;
however, the code may be
modified in the future to handle this special case.
pzoibetabinom
, pzoibetabinom.ab
,
pbetabinom
and pbetabinom.ab
can be particularly
slow.
The functions here ending in .ab
are called from those
functions which don't.
The simple transformations
prob = shape1 / (shape1 + shape2)
and
rho = 1 / (1 + shape1 + shape2)
are used, where shape1
and shape2
are the two shape parameters.
T. W. Yee and Xiangjie Xue
Extbetabinom
,
betabinomial
,
betabinomialff
,
Zoabeta
,
Beta
.
set.seed(1); rbetabinom(10, 100, prob = 0.5)
set.seed(1); rbinom(10, 100, prob = 0.5)  # The same as rho = 0

## Not run:
N <- 9; xx <- 0:N; s1 <- 2; s2 <- 3
dy <- dbetabinom.ab(xx, size = N, shape1 = s1, shape2 = s2)

barplot(rbind(dy, dbinom(xx, size = N, prob = s1 / (s1+s2))),
        beside = TRUE, col = c("blue","green"), las = 1,
        main = paste("Beta-binomial (size=",N,", shape1=", s1,
               ", shape2=", s2, ") (blue) vs\n",
               " Binomial(size=", N, ", prob=", s1/(s1+s2),
               ") (green)", sep = ""),
        names.arg = as.character(xx), cex.main = 0.8)
sum(dy * xx)  # Check expected values are equal
sum(dbinom(xx, size = N, prob = s1 / (s1+s2)) * xx)
# Should be all 0:
cumsum(dy) - pbetabinom.ab(xx, N, shape1 = s1, shape2 = s2)

y <- rbetabinom.ab(n = 1e4, size = N, shape1 = s1, shape2 = s2)
ty <- table(y)
barplot(rbind(dy, ty / sum(ty)),
        beside = TRUE, col = c("blue", "orange"), las = 1,
        main = paste("Beta-binomial (size=", N, ", shape1=", s1,
               ", shape2=", s2, ") (blue) vs\n",
               " Random generated beta-binomial(size=", N,
               ", prob=", s1/(s1+s2), ") (orange)", sep = ""),
        cex.main = 0.8, names.arg = as.character(xx))

N <- 1e5; size <- 20; pstr0 <- 0.2; pstrsize <- 0.2
kk <- rzoibetabinom.ab(N, size, s1, s2, pstr0, pstrsize)
hist(kk, probability = TRUE, border = "blue", ylim = c(0, 0.25),
     main = "Blue/green = inflated; orange = ordinary beta-binomial",
     breaks = -0.5 : (size + 0.5))
sum(kk == 0) / N     # Proportion of 0
sum(kk == size) / N  # Proportion of size
lines(0 : size, dbetabinom.ab(0 : size, size, s1, s2), col = "orange")
lines(0 : size, col = "green", type = "b",
      dzoibetabinom.ab(0 : size, size, s1, s2, pstr0, pstrsize))
## End(Not run)
Fits a beta-binomial distribution by maximum likelihood estimation. The two parameters here are the mean and correlation coefficient.
betabinomial(lmu = "logitlink", lrho = "logitlink", irho = NULL,
             imethod = 1, ishrinkage = 0.95, nsimEIM = NULL,
             zero = "rho")
lmu , lrho
|
Link functions applied to the two parameters.
See |
irho |
Optional initial value for the correlation parameter. If given,
it must be in |
imethod |
An integer with value |
zero |
Specifies which
linear/additive predictor is to be modelled as an intercept
only. If assigned, the single value can be either |
ishrinkage , nsimEIM
|
See |
There are several parameterizations of the beta-binomial
distribution. This family function directly models the mean mu
and correlation parameter rho, where mu is
the probability of success.
The model can be written
T = Y/N, where Y given P = p
has a Binomial(N, p) distribution, and where P
has a beta distribution with shape parameters
alpha and beta. Here,
N is the number of trials (e.g., litter size),
Y is the number of successes, and
p is the probability of a success (e.g., a malformation).
That is, T
is the proportion of successes. Like
binomialff, the fitted values are the
estimated probability
of success mu (i.e., E[T] and not E[Y])
and the prior weights
N are attached separately on the
object in a slot.

The probability function is

P(T = t) = choose(N, N*t) * B(N*t + alpha, N - N*t + beta) / B(alpha, beta),

where t = 0, 1/N, 2/N, ..., 1, and
B(., .) is the
beta
function
with shape parameters alpha and beta.
Recall T = Y/N
is the real response being modelled.

The default model is
eta1 = logit(mu)
and
eta2 = logit(rho)
because both
parameters lie between 0 and 1.
The mean (of T)
is
p = mu = alpha / (alpha + beta)
and the variance (of T)
is
mu * (1 - mu) * (1 + (N - 1) * rho) / N.
Here, the correlation rho
is given by
1 / (1 + alpha + beta)
and is the correlation between the N
individuals
within a litter. A litter effect is typically reflected
by a positive value of rho. It is known as the
over-dispersion parameter.

This family function uses Fisher scoring.
Elements of the second-order expected
derivatives with respect to alpha and beta
are computed numerically, which may
fail for large alpha, beta or N,
or else take a long time.
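The variance formula above implies over-dispersion relative to the binomial by the factor 1 + (N - 1) * rho; a quick simulation sketch (with hypothetical parameter values):

set.seed(1)
NN <- 10; mu <- 0.5; rho <- 0.8
y <- rbetabinom(1e5, size = NN, prob = mu, rho = rho)
c(var(y / NN), mu * (1 - mu) * (1 + (NN - 1) * rho) / NN)  # Both close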
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
.
Suppose fit
is a fitted beta-binomial
model. Then depvar(fit)
contains the sample proportions y,
fitted(fit)
returns estimates of
E(T),
and
weights(fit, type = "prior")
returns
the number of trials N.
If the estimated rho parameter is close
to 0 then
a good solution is to use
extbetabinomial
.
Or you could try
lrho = "rhobitlink"
.
This family function is prone to numerical
difficulties due to the expected information
matrices not being positive-definite or
ill-conditioned over some regions of the
parameter space. If problems occur try
setting irho
to some numerical
value, nsimEIM = 100
, say, or
else use etastart
argument of
vglm
, etc.
This function processes the input in the same way
as binomialff. But it does not handle
the case N = 1 very well because there are two
parameters to estimate, not one, for each row of the input.
Cases where N = 1
can be omitted via the
subset
argument of vglm.
The extended beta-binomial distribution
of Prentice (1986)
implemented by extbetabinomial
is the preferred VGAM
family function for BBD regression.
T. W. Yee
Moore, D. F. and Tsiatis, A. (1991). Robust estimation of the variance in moment methods for extra-binomial and extra-Poisson variation. Biometrics, 47, 383–401.
extbetabinomial
,
betabinomialff
,
Betabinom
,
binomialff
,
betaff
,
dirmultinomial
,
log1plink
,
cloglink
,
lirat
,
simulate.vlm
.
# Example 1
bdata <- data.frame(N = 10, mu = 0.5, rho = 0.8)
bdata <- transform(bdata,
                   y = rbetabinom(100, size = N, prob = mu, rho = rho))
fit <- vglm(cbind(y, N-y) ~ 1, betabinomial, bdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(cbind(depvar(fit), weights(fit, type = "prior")))

# Example 2
fit <- vglm(cbind(R, N-R) ~ 1, betabinomial, lirat,
            trace = TRUE, subset = N > 1)
coef(fit, matrix = TRUE)
Coef(fit)
t(fitted(fit))
t(depvar(fit))
t(weights(fit, type = "prior"))

# Example 3, which is more complicated
lirat <- transform(lirat, fgrp = factor(grp))
summary(lirat)  # Only 5 litters in group 3
fit2 <- vglm(cbind(R, N-R) ~ fgrp + hb, betabinomial(zero = 2),
             data = lirat, trace = TRUE, subset = N > 1)
coef(fit2, matrix = TRUE)
## Not run:
with(lirat, plot(hb[N > 1], fit2@misc$rho,
     xlab = "Hemoglobin", ylab = "Estimated rho",
     pch = as.character(grp[N > 1]), col = grp[N > 1]))
## End(Not run)
## Not run:
# cf. Figure 3 of Moore and Tsiatis (1991)
with(lirat, plot(hb, R / N, pch = as.character(grp), col = grp,
     xlab = "Hemoglobin level", ylab = "Proportion Dead",
     main = "Fitted values (lines)", las = 1))
smalldf <- with(lirat, lirat[N > 1, ])
for (gp in 1:4) {
  xx <- with(smalldf, hb[grp == gp])
  yy <- with(smalldf, fitted(fit2)[grp == gp])
  ooo <- order(xx)
  lines(xx[ooo], yy[ooo], col = gp, lwd = 2)
}
## End(Not run)
Fits a beta-binomial distribution by maximum likelihood estimation. The two parameters here are the shape parameters of the underlying beta distribution.
betabinomialff(lshape1 = "loglink", lshape2 = "loglink",
               ishape1 = 1, ishape2 = NULL, imethod = 1,
               ishrinkage = 0.95, nsimEIM = NULL, zero = NULL)
lshape1 , lshape2
|
Link functions for the two (positive) shape parameters
of the beta distribution.
See |
ishape1 , ishape2
|
Initial value for the shape parameters.
The first must be positive, and is recycled to the necessary
length. The second is optional. If a failure to converge
occurs, try assigning a different value to |
zero |
Can be
an integer specifying which linear/additive predictor
is to be modelled as an intercept only. If assigned, the
single value should be either |
ishrinkage , nsimEIM , imethod
|
See |
There are several parameterizations of the beta-binomial
distribution. This family function directly models the two
shape parameters of the associated beta distribution rather than
the probability of success (however, see Note below).
The model can be written
T = Y/N, where Y given P = p
has a Binomial(N, p) distribution, and where P
has a beta distribution with shape parameters
alpha and beta. Here,
N is the number of trials (e.g., litter size),
Y is the number of successes, and
p is the probability of a success (e.g., a malformation).
That is, T
is the proportion of successes. Like
binomialff, the fitted values are the
estimated probability
of success mu (i.e., E[T] and not E[Y])
and the prior weights
N are attached separately on the
object in a slot.

The probability function is

P(T = t) = choose(N, N*t) * B(N*t + alpha, N - N*t + beta) / B(alpha, beta),

where t = 0, 1/N, 2/N, ..., 1, and
B(., .) is the beta function
with shape parameters
alpha and beta.
Recall T = Y/N
is the real response being modelled.

The default model
is
eta1 = log(alpha)
and
eta2 = log(beta)
because both
parameters are positive.
The mean (of T)
is
p = mu = alpha / (alpha + beta)
and the variance (of T)
is
mu * (1 - mu) * (1 + (N - 1) * rho) / N.
Here, the correlation rho
is given by
1 / (1 + alpha + beta)
and is the correlation between the N
individuals
within a litter. A litter effect is typically reflected
by a positive value of rho. It is known as the
over-dispersion parameter.

This family function uses Fisher scoring. The two diagonal
elements of the second-order expected
derivatives with respect to alpha and beta
are computed numerically, which may
fail for large alpha, beta or N,
or else take a long time.
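For instance, the mean and correlation implied by given shape parameters can be computed directly from the formulas above (a sketch; cf. fit@misc$rho in Example 1 below):

s1 <- exp(1); s2 <- exp(2)
c(mu = s1 / (s1 + s2), rho = 1 / (1 + s1 + s2))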
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
.
Suppose fit
is a fitted beta-binomial model. Then
fit@y
(better: depvar(fit)
) contains the sample
proportions y,
fitted(fit)
returns estimates of
E(T), and
weights(fit, type = "prior")
returns
the number of trials N.
This family function is prone to numerical difficulties due to
the expected information matrices not being positive-definite
or ill-conditioned over some regions of the parameter space.
If problems occur try setting ishape1
to be some other
positive value, using ishape2
and/or setting zero
= 2
.
This family function may be renamed in the future.
See the warnings in betabinomial
.
This function processes the input in the same way
as binomialff. But it does not handle
the case N = 1 very well because there are two
parameters to estimate, not one, for each row of the input.
Cases where N = 1
can be omitted via the
subset
argument of vglm.
Although the two linear/additive predictors given above are
in terms of alpha and beta, basic
algebra shows that the default amounts to fitting a logit
link to the probability of success; subtracting the second
linear/additive predictor from the first gives that logistic
regression linear/additive predictor. That is,
logit(p) = eta1 - eta2 = log(alpha / beta). This is illustrated
in one of the examples below.
The extended beta-binomial distribution
of Prentice (1986)
implemented by extbetabinomial
is the preferred VGAM
family function for BBD regression.
T. W. Yee
Moore, D. F. and Tsiatis, A. (1991). Robust estimation of the variance in moment methods for extra-binomial and extra-Poisson variation. Biometrics, 47, 383–401.
Prentice, R. L. (1986). Binary regression using an extended beta-binomial distribution, with discussion of correlation induced by covariate measurement errors. Journal of the American Statistical Association, 81, 321–327.
extbetabinomial
,
betabinomial
,
Betabinom
,
binomialff
,
betaff
,
dirmultinomial
,
lirat
,
simulate.vlm
.
# Example 1
N <- 10; s1 <- exp(1); s2 <- exp(2)
y <- rbetabinom.ab(n = 100, size = N, shape1 = s1, shape2 = s2)
fit <- vglm(cbind(y, N-y) ~ 1, betabinomialff, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(fit@misc$rho)  # The correlation parameter
head(cbind(depvar(fit), weights(fit, type = "prior")))

# Example 2
fit <- vglm(cbind(R, N-R) ~ 1, betabinomialff, data = lirat,
            trace = TRUE, subset = N > 1)
coef(fit, matrix = TRUE)
Coef(fit)
fit@misc$rho  # The correlation parameter
t(fitted(fit))
t(depvar(fit))
t(weights(fit, type = "prior"))
# A "loglink" link for the 2 shape params is a logistic regression:
all.equal(c(fitted(fit)),
          as.vector(logitlink(predict(fit)[, 1] -
                              predict(fit)[, 2], inverse = TRUE)))

# Example 3, which is more complicated
lirat <- transform(lirat, fgrp = factor(grp))
summary(lirat)  # Only 5 litters in group 3
fit2 <- vglm(cbind(R, N-R) ~ fgrp + hb, betabinomialff(zero = 2),
             data = lirat, trace = TRUE, subset = N > 1)
coef(fit2, matrix = TRUE)
coef(fit2, matrix = TRUE)[, 1] -
coef(fit2, matrix = TRUE)[, 2]  # logitlink(p)
## Not run: 
with(lirat, plot(hb[N > 1], fit2@misc$rho, xlab = "Hemoglobin",
                 ylab = "Estimated rho",
                 pch = as.character(grp[N > 1]), col = grp[N > 1]))
## End(Not run)
## Not run:  # cf. Figure 3 of Moore and Tsiatis (1991)
with(lirat, plot(hb, R / N, pch = as.character(grp), col = grp,
                 xlab = "Hemoglobin level", ylab = "Proportion Dead",
                 las = 1, main = "Fitted values (lines)"))
smalldf <- with(lirat, lirat[N > 1, ])
for (gp in 1:4) {
  xx <- with(smalldf, hb[grp == gp])
  yy <- with(smalldf, fitted(fit2)[grp == gp])
  ooo <- order(xx)
  lines(xx[ooo], yy[ooo], col = gp, lwd = 2)
}
## End(Not run)
Estimation of the mean and precision parameters of the beta distribution.
betaff(A = 0, B = 1, lmu = "logitlink", lphi = "loglink",
       imu = NULL, iphi = NULL,
       gprobs.y = ppoints(8), gphi = exp(-3:5)/4, zero = NULL)
A , B
|
Lower and upper limits of the distribution. The defaults correspond to the standard beta distribution where the response lies between 0 and 1. |
lmu , lphi
|
Link function for the mean and precision parameters.
The values A and B are extracted from the min
and max arguments of extlogitlink. |
imu , iphi
|
Optional initial value for the mean and precision parameters
respectively. A |
gprobs.y , gphi , zero
|
See CommonVGAMffArguments for more information. |
The two-parameter beta distribution can be written

f(y) = (y - A)^(mu1 phi - 1) (B - y)^((1 - mu1) phi - 1) /
       [B(mu1 phi, (1 - mu1) phi) (B - A)^(phi - 1)]

for A < y < B, and B(., .) is the beta function
(see beta).
The parameter mu1 satisfies
mu1 = (mu - A) / (B - A), where
mu is the mean of Y.
That is, mu1 is the mean of a
standard beta distribution:
E(Y) = A + (B - A) mu1,
and these are the fitted values of the object.
Also, phi is positive
and A < mu < B.
Here, the limits A and B are known.

Another parameterization of the beta distribution
involving the raw
shape parameters is implemented in betaR.

For general A and B, the variance of Y is
(B - A)^2 mu1 (1 - mu1) / (1 + phi).
Then phi can be interpreted as
a precision parameter
in the sense that, for fixed mu,
the larger the value of phi,
the smaller the variance of Y.
Also, shape1 = mu1 phi and shape2 = (1 - mu1) phi.
Fisher scoring is implemented.
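To make the reparameterization above concrete, here is a small sketch
(standard beta case, A = 0 and B = 1; the mu and phi values are
arbitrary) converting between (mu, phi) and the usual shape parameters:

mu <- 0.3; phi <- 8                 # arbitrary values for illustration
shape1 <- mu * phi; shape2 <- (1 - mu) * phi
shape1 / (shape1 + shape2)          # recovers mu
mu * (1 - mu) / (1 + phi)           # the variance formula above
var(rbeta(1e5, shape1, shape2))     # simulation agrees closely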
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
The response must have values in the
interval (A, B).
The user currently needs to manually choose
lmu to
match the input of arguments A and B, e.g.,
with extlogitlink; see the example below.
Thomas W. Yee
Ferrari, S. L. P. and Francisco C.-N. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31, 799–815.
betaR
,
Beta
,
dzoabeta
,
genbetaII
,
betaII
,
betabinomialff
,
betageometric
,
betaprime
,
rbetageom
,
rbetanorm
,
kumar
,
extlogitlink
,
simulate.vlm
.
bdata <- data.frame(y = rbeta(nn <- 1000, shape1 = exp(0),
                              shape2 = exp(1)))
fit1 <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)  # Useful for intercept-only models

# General A and B, and with a covariate
bdata <- transform(bdata, x2 = runif(nn))
bdata <- transform(bdata, mu   = logitlink(0.5 - x2, inverse = TRUE),
                          prec = exp(3.0 + x2))  # prec == phi
bdata <- transform(bdata, shape2 = prec * (1 - mu),
                          shape1 = mu * prec)
bdata <- transform(bdata, y = rbeta(nn, shape1 = shape1,
                                    shape2 = shape2))
bdata <- transform(bdata, Y = 5 + 8 * y)  # From 5--13, not 0--1
fit <- vglm(Y ~ x2, data = bdata, trace = TRUE,
            betaff(A = 5, B = 13,
                   lmu = extlogitlink(min = 5, max = 13)))
coef(fit, matrix = TRUE)
Density, distribution function, and random generation for the beta-geometric distribution.
dbetageom(x, shape1, shape2, log = FALSE)
pbetageom(q, shape1, shape2, log.p = FALSE)
rbetageom(n, shape1, shape2)
x , q
|
vector of quantiles. |
n |
number of observations.
Same as in runif. |
shape1 , shape2
|
the two (positive) shape parameters of the standard
beta distribution. They are called a and b in
beta respectively. |
log , log.p
|
Logical.
If TRUE, probabilities p are given as log(p). |
The beta-geometric distribution is a geometric distribution whose
probability of success is not a constant but is generated
from a beta distribution with parameters shape1
and
shape2
. Note that the mean of this beta distribution
is shape1/(shape1+shape2)
, which therefore is the mean
of the probability of success.
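Since the beta-geometric is exactly a geometric distribution whose
success probability is drawn from a beta distribution, the mixture can
also be simulated by hand; this sketch (parameter values arbitrary)
checks rbetageom against that construction and against dbetageom:

set.seed(1)  # a sketch; shape values chosen for illustration only
n <- 1e5; shape1 <- 2; shape2 <- 3
y1 <- rbetageom(n, shape1, shape2)
y2 <- rgeom(n, prob = rbeta(n, shape1, shape2))  # the same mixture, by hand
c(mean(y1), mean(y2))  # should be similar
# Empirical frequencies should track the pmf:
cbind(dbetageom(0:5, shape1, shape2),
      prop.table(table(factor(y1, levels = 0:max(y1))))[1:6])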
dbetageom
gives the density,
pbetageom
gives the distribution function, and
rbetageom
generates random deviates.
pbetageom
can be particularly slow.
T. W. Yee
## Not run: 
shape1 <- 1; shape2 <- 2; y <- 0:30
proby <- dbetageom(y, shape1, shape2, log = FALSE)
plot(y, proby, type = "h", col = "blue", ylab = "P[Y=y]",
     main = paste0("Y ~ Beta-geometric(shape1=", shape1,
                   ", shape2=", shape2, ")"))
sum(proby)
## End(Not run)
Maximum likelihood estimation for the beta-geometric distribution.
betageometric(lprob = "logitlink", lshape = "loglink",
              iprob = NULL, ishape = 0.1,
              moreSummation = c(2, 100), tolerance = 1.0e-10, zero = NULL)
lprob , lshape
|
Parameter link functions applied to the
parameters prob and shape (see Details). |
iprob , ishape
|
Numeric.
Initial values for the two parameters.
A |
moreSummation |
Integer, of length 2.
When computing the expected information matrix a series summation
from 0 to moreSummation[1] * max(y) + moreSummation[2]
is made, in which the upper limit is an approximation
to infinity. |
tolerance |
Positive numeric. When all terms are less than this then the series is deemed to have converged. |
zero |
An integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
If used, the value must be from the set {1,2}.
See CommonVGAMffArguments for more information. |
A random variable Y has a 2-parameter beta-geometric distribution
if P(Y = y) = E[p (1 - p)^y]
for y = 0, 1, 2, ..., where the values p
are generated from a standard beta distribution with
shape parameters shape1 and shape2.
The parameterization here is to focus on the parameters

prob = shape1 / (shape1 + shape2) and
shape = 1 / (shape1 + shape2),

which correspond to the arguments lprob and lshape.
The default link functions for these ensure that the appropriate range
of the parameters is maintained.
The mean of Y is
E(Y) = shape2 / (shape1 - 1) = (1 - prob) / (prob - shape)
if shape1 > 1, and if so, then this is returned as
the fitted values.

The geometric distribution is a special case of the beta-geometric
distribution with shape = 0 (see geometric).
However, fitting data from a geometric distribution may result in
numerical problems because the estimate of
log(shape) will 'converge' to -Inf.
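A small sketch illustrating the (prob, shape) parameterization and the
mean formula above, using the beta mixture directly (parameter values
arbitrary):

set.seed(1)  # a sketch; shape1 and shape2 chosen for illustration only
shape1 <- 3; shape2 <- 2
prob  <- shape1 / (shape1 + shape2)   # = E(p)
shape <- 1 / (shape1 + shape2)
y <- rgeom(1e5, prob = rbeta(1e5, shape1, shape2))
c(mean(y), (1 - prob) / (prob - shape),
  shape2 / (shape1 - 1))              # all three should be similar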
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
The first iteration may be very slow;
if practical, it is best for the weights
argument of
vglm
etc. to be used rather than inputting a very
long vector as the response,
i.e., vglm(y ~ 1, ..., weights = wts)
is to be preferred over vglm(rep(y, wts) ~ 1, ...)
.
If convergence problems occur try inputting some values of argument
ishape
.
If an intercept-only model is fitted then the misc
slot of the
fitted object has list components shape1
and shape2
.
T. W. Yee
Paul, S. R. (2005). Testing goodness of fit of the geometric distribution: an application to human fecundability data. Journal of Modern Applied Statistical Methods, 4, 425–433.
## Not run: 
bdata <- data.frame(y = 0:11,
                    wts = c(227, 123, 72, 42, 21, 31, 11, 14, 6, 4, 7, 28))
fitb <- vglm(y ~ 1, betageometric, bdata, weight = wts, trace = TRUE)
fitg <- vglm(y ~ 1, geometric,     bdata, weight = wts, trace = TRUE)
coef(fitb, matrix = TRUE)
Coef(fitb)
sqrt(diag(vcov(fitb, untransform = TRUE)))
fitb@misc$shape1
fitb@misc$shape2
# Very strong evidence of a beta-geometric:
pchisq(2 * (logLik(fitb) - logLik(fitg)), df = 1, lower.tail = FALSE)
## End(Not run)
Maximum likelihood estimation of the 3-parameter beta II distribution.
betaII(lscale = "loglink", lshape2.p = "loglink", lshape3.q = "loglink",
       iscale = NULL, ishape2.p = NULL, ishape3.q = NULL, imethod = 1,
       gscale = exp(-5:5), gshape2.p = exp(-5:5),
       gshape3.q = seq(0.75, 4, by = 0.25),
       probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lscale , lshape2.p , lshape3.q
|
Parameter link functions applied to the
(positive) parameters b, p and q. |
iscale , ishape2.p , ishape3.q , imethod , zero
|
See |
gscale , gshape2.p , gshape3.q
|
See |
probs.y |
See |
The 3-parameter beta II is the 4-parameter
generalized beta II distribution with shape parameter a = 1.
It is also known as the Pearson VI distribution.
Other distributions which are special cases of the 3-parameter
beta II include the Lomax (p = 1) and inverse Lomax
(q = 1). More details can be found in Kleiber and Kotz
(2003).

The beta II distribution has density

f(y) = y^(p - 1) / [b^p B(p, q) (1 + y/b)^(p + q)]

for b > 0, p > 0, q > 0, y >= 0.
Here, b is the scale parameter scale,
and the others are shape parameters.
The mean is

E(Y) = b gamma(p + 1) gamma(q - 1) / (gamma(p) gamma(q))

provided q > 1; these are returned as the fitted values.
This family function handles multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
See the notes in genbetaII
.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
betaff
,
genbetaII
,
dagum
,
sinmad
,
fisk
,
inv.lomax
,
lomax
,
paralogistic
,
inv.paralogistic
.
bdata <- data.frame(y = rsinmad(2000, shape1.a = 1, shape3.q = exp(2),
                                scale = exp(1)))  # Not genuine data!
# fit <- vglm(y ~ 1, betaII, data = bdata, trace = TRUE)
fit <- vglm(y ~ 1, betaII(ishape2.p = 0.7, ishape3.q = 0.7),
            data = bdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, distribution function, quantile function and random generation for the univariate beta-normal distribution.
dbetanorm(x, shape1, shape2, mean = 0, sd = 1, log = FALSE)
pbetanorm(q, shape1, shape2, mean = 0, sd = 1,
          lower.tail = TRUE, log.p = FALSE)
qbetanorm(p, shape1, shape2, mean = 0, sd = 1,
          lower.tail = TRUE, log.p = FALSE)
rbetanorm(n, shape1, shape2, mean = 0, sd = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in runif. |
shape1 , shape2
|
the two (positive) shape parameters of the standard beta
distribution. They are called |
mean , sd
|
the mean and standard deviation of the univariate
normal distribution
( |
log , log.p
|
Logical.
If |
lower.tail |
Logical. If |
The function betauninormal
, the VGAM family function
for estimating the parameters,
has not yet been written.
dbetanorm
gives the density,
pbetanorm
gives the distribution function,
qbetanorm
gives the quantile function, and
rbetanorm
generates random deviates.
T. W. Yee
Gupta, A. K. and Nadarajah, S. (2004). Handbook of Beta Distribution and Its Applications, pp.146–152. New York: Marcel Dekker.
## Not run: 
shape1 <- 0.1; shape2 <- 4; m <- 1
x <- seq(-10, 2, len = 501)
plot(x, dbetanorm(x, shape1, shape2, m = m), type = "l",
     ylim = 0:1, las = 1,
     ylab = paste0("betanorm(", shape1, ", ", shape2,
                   ", m=", m, ", sd=1)"),
     main = "Blue is density, orange is the CDF",
     sub = "Gray lines are the 10,20,...,90 percentiles",
     col = "blue")
lines(x, pbetanorm(x, shape1, shape2, m = m), col = "orange")
abline(h = 0, col = "black")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qbetanorm(probs, shape1, shape2, m = m)
lines(Q, dbetanorm(Q, shape1, shape2, m = m),
      col = "gray50", lty = 2, type = "h")
lines(Q, pbetanorm(Q, shape1, shape2, m = m),
      col = "gray50", lty = 2, type = "h")
abline(h = probs, col = "gray50", lty = 2)
pbetanorm(Q, shape1, shape2, m = m) - probs  # Should be all 0
## End(Not run)
Estimation of the two shape parameters of the beta-prime distribution by maximum likelihood estimation.
betaprime(lshape = "loglink", ishape1 = 2, ishape2 = NULL, zero = NULL)
lshape |
Parameter link function applied to the two (positive) shape
parameters. See Links for more choices. |
ishape1 , ishape2 , zero
|
See CommonVGAMffArguments for information. |
The beta-prime distribution is given by

f(y) = y^(shape1 - 1) (1 + y)^(-shape1 - shape2) / B(shape1, shape2)

for y > 0.
The shape parameters are positive, and
here, B(., .) is the beta function.
The mean of Y is
shape1 / (shape2 - 1) provided shape2 > 1;
these are returned as the fitted values.

If Y has a Beta(shape1, shape2) distribution then
Y / (1 - Y) and (1 - Y) / Y
have a Betaprime(shape1, shape2)
and Betaprime(shape2, shape1)
distribution respectively.
Also, if Y1 has a Gamma(shape1) distribution
and Y2 has a Gamma(shape2) distribution
then Y1 / Y2 has a Betaprime(shape1, shape2)
distribution.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
The response must have positive values only.
The beta-prime distribution is also known as the beta distribution of the second kind or the inverted beta distribution.
Thomas W. Yee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995). Chapter 25 of: Continuous Univariate Distributions, 2nd edition, Volume 2, New York: Wiley.
nn <- 1000
bdata <- data.frame(shape1 = exp(1), shape2 = exp(3))
bdata <- transform(bdata, yb = rbeta(nn, shape1, shape2))
bdata <- transform(bdata, y1 = (1 - yb) / yb,
                          y2 = yb / (1 - yb),
                          y3 = rgamma(nn, exp(3)) / rgamma(nn, exp(2)))
fit1 <- vglm(y1 ~ 1, betaprime, data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
fit2 <- vglm(y2 ~ 1, betaprime, data = bdata, trace = TRUE)
coef(fit2, matrix = TRUE)
fit3 <- vglm(y3 ~ 1, betaprime, data = bdata, trace = TRUE)
coef(fit3, matrix = TRUE)
# Compare the fitted values
with(bdata, mean(y3))
head(fitted(fit3))
Coef(fit3)  # Useful for intercept-only models
Estimation of the shape parameters of the two-parameter beta distribution.
betaR(lshape1 = "loglink", lshape2 = "loglink",
      i1 = NULL, i2 = NULL, trim = 0.05,
      A = 0, B = 1, parallel = FALSE, zero = NULL)
lshape1 , lshape2 , i1 , i2
|
Details at |
trim |
An argument which is fed into mean(); it is the fraction
(0 to 0.5) of observations to be trimmed from each end of the
response y. This is used when computing initial values. |
A , B
|
Lower and upper limits of the distribution. The defaults correspond to the standard beta distribution where the response lies between 0 and 1. |
parallel , zero
|
See |
The two-parameter beta distribution is given by

f(y) = (y - A)^(shape1 - 1) (B - y)^(shape2 - 1) /
       [B(shape1, shape2) (B - A)^(shape1 + shape2 - 1)]

for A < y < B, and B(., .) is the beta function
(see beta).
The shape parameters are positive, and
here, the limits A and B are known.
The mean of Y is

E(Y) = A + (B - A) shape1 / (shape1 + shape2),

and these are the fitted values of the object.

For the standard beta distribution the variance of Y is
shape1 shape2 / [(1 + shape1 + shape2) (shape1 + shape2)^2].
If sigma^2 = 1 / (1 + shape1 + shape2)
then the variance of Y can be written
sigma^2 mu (1 - mu), where
mu = shape1 / (shape1 + shape2)
is the mean of Y.

Another parameterization of the beta distribution
involving the mean
and a precision parameter is implemented in betaff.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
The response must have values in the interval
(A, B). VGAM 0.7-4 and prior called this function
betaff.
Thomas W. Yee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995). Chapter 25 of: Continuous Univariate Distributions, 2nd edition, Volume 2, New York: Wiley.
Gupta, A. K. and Nadarajah, S. (2004). Handbook of Beta Distribution and Its Applications, New York: Marcel Dekker.
betaff
,
Beta
,
genbetaII
,
betaII
,
betabinomialff
,
betageometric
,
betaprime
,
rbetageom
,
rbetanorm
,
kumar
,
simulate.vlm
.
bdata <- data.frame(y = rbeta(1000, shape1 = exp(0), shape2 = exp(1)))
fit <- vglm(y ~ 1, betaR(lshape1 = "identitylink",
                         lshape2 = "identitylink"),
            bdata, trace = TRUE, crit = "coef")
fit <- vglm(y ~ 1, betaR, data = bdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)  # Useful for intercept-only models

bdata <- transform(bdata, Y = 5 + 8 * y)  # From 5 to 13, not 0 to 1
fit <- vglm(Y ~ 1, betaR(A = 5, B = 13), data = bdata, trace = TRUE)
Coef(fit)
c(meanY = with(bdata, mean(Y)), head(fitted(fit), 2))
Estimate the association parameter of Ali-Mikhail-Haq's bivariate distribution by maximum likelihood estimation.
biamhcop(lapar = "rhobitlink", iapar = NULL, imethod = 1, nsimEIM = 250)
lapar |
Link function applied to the association parameter
alpha. |
iapar |
Numeric. Optional initial value for alpha. |
imethod |
An integer with value 1 or 2 which
specifies the initialization method. If failure to converge
occurs try the other value. |
nsimEIM |
See |
The cumulative distribution function is

P(Y1 <= y1, Y2 <= y2) = y1 y2 / (1 - alpha (1 - y1) (1 - y2))

for -1 < alpha < 1.
The support of the function is the unit square.
The marginal distributions are the standard uniform distributions.
When alpha = 0 the random variables are
independent.
This is an Archimedean copula.
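The formula above is easy to evaluate directly; this sketch (with a
hypothetical helper amh(), not part of VGAM) checks it against
pbiamhcop:

amh <- function(y1, y2, alpha)   # hypothetical helper, not in VGAM
  y1 * y2 / (1 - alpha * (1 - y1) * (1 - y2))
alpha <- 0.7
c(amh(0.3, 0.8, alpha), pbiamhcop(0.3, 0.8, apar = alpha))  # should agree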
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution.
T. W. Yee and C. S. Chee
Balakrishnan, N. and Lai, C.-D. (2009). Continuous Bivariate Distributions, 2nd ed. New York: Springer.
rbiamhcop
,
bifgmcop
,
bigumbelIexp
,
rbilogis
,
simulate.vlm
.
ymat <- rbiamhcop(1000, apar = rhobitlink(2, inverse = TRUE))
fit <- vglm(ymat ~ 1, biamhcop, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
Density, distribution function, and random generation for the (one parameter) bivariate Ali-Mikhail-Haq distribution.
dbiamhcop(x1, x2, apar, log = FALSE)
pbiamhcop(q1, q2, apar)
rbiamhcop(n, apar)
x1 , x2 , q1 , q2
|
vector of quantiles. |
n |
number of observations.
Same as in runif. |
apar |
the association parameter. |
log |
Logical.
If |
See biamhcop
, the VGAM
family functions for estimating the
parameter by maximum likelihood estimation, for the formula of
the cumulative distribution function and other details.
dbiamhcop
gives the density,
pbiamhcop
gives the distribution function, and
rbiamhcop
generates random deviates (a two-column matrix).
T. W. Yee and C. S. Chee
x <- seq(0, 1, len = (N <- 101)); apar <- 0.7
ox <- expand.grid(x, x)
zedd <- dbiamhcop(ox[, 1], ox[, 2], apar = apar)
## Not run: 
contour(x, x, matrix(zedd, N, N), col = "blue")
zedd <- pbiamhcop(ox[, 1], ox[, 2], apar = apar)
contour(x, x, matrix(zedd, N, N), col = "blue")
plot(r <- rbiamhcop(n = 1000, apar = apar), col = "blue")
par(mfrow = c(1, 2))
hist(r[, 1])  # Should be uniform
hist(r[, 2])  # Should be uniform
## End(Not run)
Estimate the correlation parameter of the (bivariate) Clayton copula distribution by maximum likelihood estimation.
biclaytoncop(lapar = "loglink", iapar = NULL, imethod = 1,
             parallel = FALSE, zero = NULL)
lapar , iapar , imethod
|
Details at CommonVGAMffArguments. |
parallel , zero
|
Details at CommonVGAMffArguments. |
The cumulative distribution function is

P(Y1 <= y1, Y2 <= y2) = (y1^(-alpha) + y2^(-alpha) - 1)^(-1/alpha)

for alpha >= 0.
Here, alpha is the association parameter.
The support of the function is the interior of the unit square;
however, values of 0 and/or 1 are not allowed (currently).
The marginal distributions are the standard uniform distributions.
When alpha = 0 the random variables are independent.
This VGAM family function can handle multiple responses, for example, a six-column matrix where the first 2 columns are the first of three responses, the next 2 columns are the next response, etc.
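For intuition about the strength of association: for the Clayton
copula, Kendall's tau equals alpha / (alpha + 2). A quick simulation
sketch using kendall.tau (the apar value is arbitrary):

set.seed(1)  # a sketch
apar <- 2
ymat <- rbiclaytoncop(n = 2000, apar = apar)
c(kendall.tau(ymat[, 1], ymat[, 2]), apar / (apar + 2))  # similar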
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response matrix must have a multiple of two-columns. Currently, the fitted value is a matrix with the same number of columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution.
This VGAM family function is fragile; each response must be in the interior of the unit square.
R. Feyter and T. W. Yee
Clayton, D. (1982). A model for association in bivariate survival data. Journal of the Royal Statistical Society, Series B, Methodological, 44, 414–422.
Schepsmeier, U. and Stober, J. (2014). Derivatives and Fisher information of bivariate copulas. Statistical Papers 55, 525–542.
rbiclaytoncop
,
dbiclaytoncop
,
kendall.tau
.
ymat <- rbiclaytoncop(n = (nn <- 1000), apar = exp(2))
bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2],
                    y3 = ymat[, 1], y4 = ymat[, 2],
                    x2 = runif(nn))
summary(bdata)
## Not run: plot(ymat, col = "blue")
fit1 <- vglm(cbind(y1, y2, y3, y4) ~ 1,  # 2 responses, e.g., (y1,y2) is the 1st
             biclaytoncop, data = bdata,
             trace = TRUE, crit = "coef")  # Sometimes a good idea
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
summary(fit1)

# Another example; apar is a function of x2
bdata <- transform(bdata, apar = exp(-0.5 + x2))
ymat <- rbiclaytoncop(n = nn, apar = with(bdata, apar))
bdata <- transform(bdata, y5 = ymat[, 1], y6 = ymat[, 2])
fit2 <- vgam(cbind(y5, y6) ~ s(x2), data = bdata,
             biclaytoncop(lapar = "loglink"), trace = TRUE)
## Not run: plot(fit2, lcol = "blue", scol = "orange", se = TRUE)
Density and random generation for the (one parameter) bivariate Clayton copula distribution.
dbiclaytoncop(x1, x2, apar = 0, log = FALSE)
rbiclaytoncop(n, apar = 0)
x1 , x2
|
vector of quantiles.
The x1 and x2 values should be in the interval (0, 1). |
n |
number of observations.
Same as in runif. |
apar |
the association parameter.
Should be in the
interval [0, Inf). |
log |
Logical.
If |
See biclaytoncop
, the VGAM
family functions for estimating the
parameter by maximum likelihood estimation,
for the formula of the
cumulative distribution function and other
details.
dbiclaytoncop
gives the density at point
(x1
,x2
),
rbiclaytoncop
generates random
deviates (a two-column matrix).
dbiclaytoncop()
does not yet handle
x1 = 0
and/or x2 = 0
.
R. Feyter and T. W. Yee
Clayton, D. (1982). A model for association in bivariate survival data. Journal of the Royal Statistical Society, Series B, Methodological, 44, 414–422.
biclaytoncop
,
binormalcop
,
binormal
.
## Not run: 
edge <- 0.01  # A small positive value
N <- 101; x <- seq(edge, 1.0 - edge, len = N); Rho <- 0.7
ox <- expand.grid(x, x)
zedd <- dbiclaytoncop(ox[, 1], ox[, 2], apar = Rho, log = TRUE)
par(mfrow = c(1, 2))
contour(x, x, matrix(zedd, N, N), col = 4, labcex = 1.5, las = 1)
plot(rbiclaytoncop(1000, 2), col = 4, las = 1)
## End(Not run)
Calculates the Bayesian information criterion (BIC) for a fitted model object for which a log-likelihood value has been obtained.
BICvlm(object, ..., k = log(nobs(object)))
object , ...
|
Same as AICvlm. |
k |
Numeric, the penalty per parameter to be used;
the default is log(nobs(object)). |
The so-called BIC or SBC (Schwarz's Bayesian criterion)
can be computed by calling AICvlm
with a
different k
argument.
See AICvlm
for information and caveats.
Returns a numeric value with the corresponding BIC, or ...,
depending on k
.
Like AICvlm
, this code has not been double-checked.
The general applicability of BIC
for the VGLM/VGAM classes
has not been developed fully.
In particular, BIC
should not be run on some VGAM family
functions because of violation of certain regularity conditions, etc.
Many VGAM family functions such as
cumulative
can have the number of
observations absorbed into the prior weights argument
(e.g., weights
in vglm
), either
before or after fitting. Almost all VGAM family
functions can have the number of observations defined by
the weights
argument, e.g., as an observed frequency.
BIC
simply uses the number of rows of the model matrix, say,
as defining n
, hence the user must be very careful
of this possible error.
Use at your own risk!!
BIC, AIC and other ICs can have many additive constants added to them. What matters are the differences, since the minimum value corresponds to the best model.
BIC has not been defined for QRR-VGLMs yet.
T. W. Yee.
AICvlm
,
VGLMs are described in vglm-class
;
VGAMs are described in vgam-class
;
RR-VGLMs are described in rrvglm-class
;
BIC
,
AIC
.
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = TRUE, reverse = TRUE),
              data = pneumo))
coef(fit1, matrix = TRUE)
BIC(fit1)
(fit2 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE),
              data = pneumo))
coef(fit2, matrix = TRUE)
BIC(fit2)
Estimate the association parameter of Farlie-Gumbel-Morgenstern's bivariate distribution by maximum likelihood estimation.
bifgmcop(lapar = "rhobitlink", iapar = NULL, imethod = 1)
lapar , iapar , imethod
|
Details at CommonVGAMffArguments. |
The cumulative distribution function is

P(Y1 <= y1, Y2 <= y2) = y1 y2 (1 + alpha (1 - y1) (1 - y2))

for -1 < alpha < 1.
The support of the function is the unit square.
The marginal distributions are the standard uniform
distributions. When alpha = 0 the random
variables are independent.
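This sketch (with a hypothetical helper fgm(), not part of VGAM)
evaluates the formula above directly and checks it against pbifgmcop:

fgm <- function(y1, y2, alpha)   # hypothetical helper, not in VGAM
  y1 * y2 * (1 + alpha * (1 - y1) * (1 - y2))
alpha <- 0.5
c(fgm(0.3, 0.8, alpha), pbifgmcop(0.3, 0.8, apar = alpha))  # should agree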
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution.
T. W. Yee
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005). Extreme Value and Related Models with Applications in Engineering and Science, Hoboken, NJ, USA: Wiley-Interscience.
Smith, M. D. (2007). Invariance theorems for Fisher information. Communications in Statistics—Theory and Methods, 36(12), 2213–2222.
rbifgmcop
,
bifrankcop
,
bifgmexp
,
simulate.vlm
.
ymat <- rbifgmcop(1000, apar = rhobitlink(3, inverse = TRUE))
## Not run: plot(ymat, col = "blue")
fit <- vglm(ymat ~ 1, fam = bifgmcop, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
Density, distribution function, and random generation for the (one parameter) bivariate Farlie-Gumbel-Morgenstern's distribution.
dbifgmcop(x1, x2, apar, log = FALSE)
pbifgmcop(q1, q2, apar)
rbifgmcop(n, apar)
x1 , x2 , q1 , q2
|
vector of quantiles. |
n |
number of observations.
Same as in runif. |
apar |
the association parameter. |
log |
Logical.
If |
See bifgmcop
, the VGAM
family functions for estimating the
parameter by maximum likelihood estimation, for the formula of
the cumulative distribution function and other details.
dbifgmcop
gives the density,
pbifgmcop
gives the distribution function, and
rbifgmcop
generates random deviates (a two-column matrix).
T. W. Yee
## Not run: 
N <- 101; x <- seq(0.0, 1.0, len = N); apar <- 0.7
ox <- expand.grid(x, x)
zedd <- dbifgmcop(ox[, 1], ox[, 2], apar = apar)
contour(x, x, matrix(zedd, N, N), col = "blue")
zedd <- pbifgmcop(ox[, 1], ox[, 2], apar = apar)
contour(x, x, matrix(zedd, N, N), col = "blue")
plot(r <- rbifgmcop(n = 3000, apar = apar), col = "blue")
par(mfrow = c(1, 2))
hist(r[, 1])  # Should be uniform
hist(r[, 2])  # Should be uniform
## End(Not run)
Estimate the association parameter of FGM bivariate exponential distribution by maximum likelihood estimation.
bifgmexp(lapar = "rhobitlink", iapar = NULL, tola0 = 0.01, imethod = 1)
lapar |
Link function for the
association parameter
alpha. |
iapar |
Numeric. Optional initial value for alpha. |
tola0 |
Positive numeric.
If the estimate of |
imethod |
An integer with value 1 or 2 which
specifies the initialization method. If failure to converge
occurs try the other value. |
The cumulative distribution function is

P(Y1 <= y1, Y2 <= y2) = (1 - e^(-y1)) (1 - e^(-y2)) (1 + alpha e^(-y1 - y2))

for alpha between -1 and 1.
The support of the function is for y1 > 0
and y2 > 0.
The marginal distributions are an exponential distribution with
unit mean.
When alpha = 0 then the random variables are
independent, and this causes some problems in the estimation
process since the distribution no longer depends on the
parameter.

A variant of Newton-Raphson is used, which only seems to
work for an intercept model.
It is a very good idea to set trace = TRUE.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 1. This is because each marginal distribution corresponds to an exponential distribution with unit mean.
This VGAM family function should be used with caution.
T. W. Yee
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005). Extreme Value and Related Models with Applications in Engineering and Science, Hoboken, NJ, USA: Wiley-Interscience.
N <- 1000; mdata <- data.frame(y1 = rexp(N), y2 = rexp(N))
## Not run: with(mdata, plot(cbind(y1, y2)))
fit <- vglm(cbind(y1, y2) ~ 1, bifgmexp, data = mdata, trace = TRUE)
fit <- vglm(cbind(y1, y2) ~ 1, bifgmexp, data = mdata,  # May fail
            trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
Estimate the association parameter of Frank's bivariate distribution by maximum likelihood estimation.
bifrankcop(lapar = "loglink", iapar = 2, nsimEIM = 250)
lapar |
Link function applied to the (positive) association parameter
alpha. |
iapar |
Numeric. Initial value for alpha. |
nsimEIM |
See CommonVGAMffArguments for more information. |
The cumulative distribution function is

P(Y1 <= y1, Y2 <= y2) = H_alpha(y1, y2)
                      = log_alpha [1 + (alpha^y1 - 1)(alpha^y2 - 1)/(alpha - 1)]

for alpha != 1.
Note the logarithm here is to base alpha.
The support of the function is the unit square.

When 0 < alpha < 1 the probability density function
h_alpha(y1, y2)
is symmetric with respect to the lines y2 = y1
and y2 = 1 - y1.
When alpha > 1 then
h_alpha(y1, y2) = h_{1/alpha}(1 - y1, y2).
If alpha = 1
then H(y1, y2) = y1 y2,
i.e., uniform on the unit square.
As alpha
approaches 0 then H(y1, y2) = min(y1, y2).
As alpha
approaches infinity then H(y1, y2) = max(0, y1 + y2 - 1).

The default is to use Fisher scoring implemented using
rbifrankcop.
For intercept-only models an alternative is to set
nsimEIM = NULL
so that a variant of Newton-Raphson is used.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to a half. This is because the marginal distributions correspond to a standard uniform distribution.
T. W. Yee
Genest, C. (1987). Frank's family of bivariate distributions. Biometrika, 74, 549–555.
rbifrankcop
,
bifgmcop
,
simulate.vlm
.
## Not run: 
ymat <- rbifrankcop(n = 2000, apar = exp(4))
plot(ymat, col = "blue")
fit <- vglm(ymat ~ 1, fam = bifrankcop, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
vcov(fit)
head(fitted(fit))
summary(fit)
## End(Not run)
Estimate the association parameter of Gumbel's Type I bivariate distribution by maximum likelihood estimation.
bigumbelIexp(lapar = "identitylink", iapar = NULL, imethod = 1)
lapar |
Link function applied to the association parameter
alpha. |
iapar |
Numeric. Optional initial value for alpha. |
imethod |
An integer with value 1 or 2 which
specifies the initialization method. If failure to converge
occurs try the other value. |
The cumulative distribution function is

P(Y1 <= y1, Y2 <= y2) = e^(-y1 - y2 + alpha y1 y2) + 1 - e^(-y1) - e^(-y2)

for real alpha.
The support of the function is for y1 > 0 and
y2 > 0.
The marginal distributions are an exponential distribution with
unit mean.

A variant of Newton-Raphson is used, which only seems
to work for an intercept model.
It is a very good idea to set trace = TRUE.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response must be a two-column matrix. Currently, the fitted value is a matrix with two columns and values equal to 1. This is because each marginal distribution corresponds to an exponential distribution with unit mean.
This VGAM family function should be used with caution.
T. W. Yee
Gumbel, E. J. (1960). Bivariate Exponential Distributions. Journal of the American Statistical Association, 55, 698–707.
nn <- 1000
gdata <- data.frame(y1 = rexp(nn), y2 = rexp(nn))
## Not run: with(gdata, plot(cbind(y1, y2)))
fit <- vglm(cbind(y1, y2) ~ 1, bigumbelIexp, gdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
Density, distribution function, quantile function and random generation for the 4-parameter bivariate logistic distribution.
dbilogis(x1, x2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1, log = FALSE)
pbilogis(q1, q2, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1)
rbilogis(n, loc1 = 0, scale1 = 1, loc2 = 0, scale2 = 1)
x1 , x2 , q1 , q2
|
vector of quantiles. |
n |
number of observations.
Same as in runif. |
loc1 , loc2
|
the location parameters l1 and l2. |
scale1 , scale2
|
the scale parameters s1 and s2. |
log |
Logical.
If |
See bilogis
, the VGAM family function for
estimating the four parameters by maximum likelihood estimation,
for the formula of the cumulative distribution function and
other details.
dbilogis
gives the density,
pbilogis
gives the distribution function, and
rbilogis
generates random deviates (a two-column matrix).
Gumbel (1961) proposed two bivariate logistic distributions with
logistic distribution marginals, which he called Type I and Type II.
The Type I is this one.
The Type II belongs to the Morgenstern type.
The biamhcop
distribution has, as a special case,
this distribution, which is when the random variables are
independent.
T. W. Yee
Gumbel, E. J. (1961). Bivariate logistic distributions. Journal of the American Statistical Association, 56, 335–349.
## Not run: 
par(mfrow = c(1, 3))
ymat <- rbilogis(n = 2000, loc1 = 5, loc2 = 7, scale2 = exp(1))
myxlim <- c(-2, 15); myylim <- c(-10, 30)
plot(ymat, xlim = myxlim, ylim = myylim)
N <- 100
x1 <- seq(myxlim[1], myxlim[2], len = N)
x2 <- seq(myylim[1], myylim[2], len = N)
ox <- expand.grid(x1, x2)
z <- dbilogis(ox[, 1], ox[, 2], loc1 = 5, loc2 = 7, scale2 = exp(1))
contour(x1, x2, matrix(z, N, N), main = "density")
z <- pbilogis(ox[, 1], ox[, 2], loc1 = 5, loc2 = 7, scale2 = exp(1))
contour(x1, x2, matrix(z, N, N), main = "cdf")
## End(Not run)
Estimates the four parameters of the bivariate logistic distribution by maximum likelihood estimation.
bilogistic(llocation = "identitylink", lscale = "loglink",
           iloc1 = NULL, iscale1 = NULL, iloc2 = NULL, iscale2 = NULL,
           imethod = 1, nsimEIM = 250, zero = NULL)
llocation |
Link function applied to both location parameters
l1 and l2. |
lscale |
Parameter link function applied to both
(positive) scale parameters s1 and s2. |
iloc1 , iloc2
|
Initial values for the location parameters.
By default, initial values are chosen internally using
imethod. |
iscale1 , iscale2
|
Initial values for the scale parameters.
By default, initial values are chosen internally using
imethod. |
imethod |
An integer with value 1 or 2 which
specifies the initialization method. If failure to converge
occurs try the other value. |
nsimEIM , zero
|
See CommonVGAMffArguments for information. |
The four-parameter bivariate logistic distribution has a density that can be written as

f(y1, y2) = 2 e1 e2 / [s1 s2 (1 + e1 + e2)^3],
where ej = exp(-(yj - lj)/sj),

where s1 and s2 are the scale parameters,
and l1 and l2
are the location parameters.
Each of the two responses are unbounded, i.e.,
-Inf < yj < Inf.
The mean of Y1 is l1 etc.
The fitted values are returned in a 2-column matrix.
The cumulative distribution function is

F(y1, y2) = 1 / (1 + e1 + e2).

The marginal distribution of Y1 is

P(Y1 <= y1) = 1 / (1 + e1).

By default, eta1 = l1,
eta2 = log(s1),
eta3 = l2,
eta4 = log(s2) are the linear/additive
predictors.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
T. W. Yee
Gumbel, E. J. (1961). Bivariate logistic distributions. Journal of the American Statistical Association, 56, 335–349.
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005). Extreme Value and Related Models with Applications in Engineering and Science, Hoboken, NJ, USA: Wiley-Interscience.
## Not run: 
ymat <- rbilogis(n <- 50, loc1 = 5, loc2 = 7, scale2 = exp(1))
plot(ymat)
bfit <- vglm(ymat ~ 1, family = bilogistic, trace = TRUE)
coef(bfit, matrix = TRUE)
Coef(bfit)
head(fitted(bfit))
vcov(bfit)
head(weights(bfit, type = "work"))
summary(bfit)
## End(Not run)
Fits a Palmgren (bivariate odds-ratio model, or bivariate logistic regression) model to two binary responses. Actually, a bivariate logistic/probit/cloglog/cauchit model can be fitted. The odds ratio is used as a measure of dependency.
binom2.or(lmu = "logitlink", lmu1 = lmu, lmu2 = lmu, loratio = "loglink",
          imu1 = NULL, imu2 = NULL, ioratio = NULL, zero = "oratio",
          exchangeable = FALSE, tol = 0.001, more.robust = FALSE)
lmu |
Link function applied to the two marginal probabilities.
See Links for more choices. |
lmu1 , lmu2
|
Link function applied to the first and second of the two marginal probabilities. |
loratio |
Link function applied to the odds ratio.
See Links for more choices. |
imu1 , imu2 , ioratio
|
Optional initial values for the marginal probabilities and odds
ratio. See CommonVGAMffArguments for more information. |
zero |
Which linear/additive predictor is modelled as an intercept only?
The default is for the odds ratio.
A |
exchangeable |
Logical.
If TRUE, the two marginal probabilities are constrained
to be equal. |
tol |
Tolerance for testing independence. Should be some small positive numerical value. |
more.robust |
Logical. If |
Also known informally as the Palmgren model,
the bivariate logistic model is
a full-likelihood based model defined as two logistic regressions plus
log(oratio) = eta3
where eta3
is the third linear/additive
predictor relating the odds ratio to explanatory variables.
Explicitly, the default model is

logit[P(Yj = 1)] = etaj for j = 1, 2

for the marginals, and

log(psi) = eta3,

which specifies the dependency between the two responses. Here, the responses
equal 1 for a success and 0 for a failure, and the odds ratio is often
written psi = p00 p11 / (p10 p01), where
pjk = P(Y1 = j, Y2 = k).
The model is fitted by maximum likelihood estimation since the full
likelihood is specified.
The two binary responses are independent if and only if the odds ratio
is unity, or equivalently, the log odds ratio is 0. Fisher scoring
is implemented.
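Given the two marginals and the odds ratio, the four joint
probabilities are determined; when psi differs from 1, p11 is a root
of a quadratic. The following sketch (with a hypothetical helper
joint11(), not part of VGAM) makes this explicit and checks it against
dbinom2.or:

joint11 <- function(mu1, mu2, psi) {  # hypothetical helper
  if (abs(psi - 1) < 1e-8) return(mu1 * mu2)  # independence
  aa <- 1 + (mu1 + mu2) * (psi - 1)
  (aa - sqrt(aa^2 - 4 * psi * (psi - 1) * mu1 * mu2)) / (2 * (psi - 1))
}
mu1 <- 0.7; mu2 <- 0.4; psi <- exp(2)
p11 <- joint11(mu1, mu2, psi)
c(p00 = 1 - mu1 - mu2 + p11, p01 = mu2 - p11,
  p10 = mu1 - p11, p11 = p11)
dbinom2.or(mu1 = mu1, mu2 = mu2, oratio = psi)  # should match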
The default models the odds ratio as a single parameter only,
i.e., an intercept-only model, but this can be circumvented by
setting zero = NULL in order to model the odds ratio as
a function of all the explanatory variables.
The function binom2.or()
can handle other
probability link functions such as probitlink
,
clogloglink
and cauchitlink
links
as well, so is quite general. In fact, the two marginal
probabilities can each have a different link function.
A similar model is the bivariate probit model
(binom2.rho
), which is based on a standard
bivariate normal distribution, but the bivariate probit model
is less interpretable and flexible.
The exchangeable
argument should be used when the error
structure is exchangeable, e.g., with eyes or ears data.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
When fitted, the fitted.values slot of the
object contains the four joint probabilities, labelled
as (Y1, Y2) = (0,0), (0,1), (1,0), (1,1),
respectively. These estimated probabilities should be extracted
with the fitted generic function.
At present we call binom2.or
families a
bivariate odds-ratio model.
The response should be either a 4-column matrix of counts
(whose columns correspond
to (Y1, Y2) = (0,0), (0,1), (1,0),
(1,1) respectively), or a two-column matrix where each column
has two distinct values, or a factor with four levels.
The function
rbinom2.or
may be used to generate
such data. Successful convergence requires at least one case
of each of the four possible outcomes.
By default, a constant odds ratio is fitted because zero
= 3
. Set zero = NULL
if you want the odds ratio to be
modelled as a function of the explanatory variables; however,
numerical problems are more likely to occur.
The argument lmu
, which is actually redundant, is used for
convenience and for upward compatibility: specifying lmu
only means the link function will be applied to lmu1
and lmu2
. Users who want a different link function for
each of the two marginal probabilities should use the lmu1
and lmu2
arguments, and the argument lmu
is then
ignored. It doesn't make sense to specify exchangeable =
TRUE
and have different link functions for the two marginal
probabilities.
Regarding Yee and Dirnbock (2009),
the xij
(see vglm.control
) argument enables
environmental variables with different values at the two time
points to be entered into an exchangeable binom2.or
model. See the author's webpage for sample code.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
le Cessie, S. and van Houwelingen, J. C. (1994). Logistic regression for correlated binary data. Applied Statistics, 43, 95–108.
Palmgren, J. (1989). Regression Models for Bivariate Binary Responses. Technical Report no. 101, Department of Biostatistics, University of Washington, Seattle.
Yee, T. W. and Dirnbock, T. (2009). Models for analysing species' presence/absence data at two time points. Journal of Theoretical Biology, 259(4), 684–694.
rbinom2.or
,
binom2.rho
,
loglinb2
,
loglinb3
,
zipebcom
,
coalminers
,
binomialff
,
logitlink
,
probitlink
,
clogloglink
,
cauchitlink
.
# Fit the model in Table 6.7 in McCullagh and Nelder (1989)
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age,
            binom2.or(zero = NULL), data = coalminers)
fitted(fit)
summary(fit)
coef(fit, matrix = TRUE)
c(weights(fit, type = "prior")) * fitted(fit)  # Table 6.8

## Not run: 
with(coalminers, matplot(Age, fitted(fit), type = "l", las = 1,
                         xlab = "(age - 42) / 5", lwd = 2))
with(coalminers, matpoints(Age, depvar(fit), col = 1:4))
legend(x = -4, y = 0.5, lty = 1:4, col = 1:4, lwd = 2,
       legend = c("1 = (Breathlessness=0, Wheeze=0)",
                  "2 = (Breathlessness=0, Wheeze=1)",
                  "3 = (Breathlessness=1, Wheeze=0)",
                  "4 = (Breathlessness=1, Wheeze=1)"))
## End(Not run)

# Another model: pet ownership
## Not run: 
data(xs.nz, package = "VGAMdata")
# More homogeneous:
petdata <- subset(xs.nz, ethnicity == "European" & age < 70 & sex == "M")
petdata <- na.omit(petdata[, c("cat", "dog", "age")])
summary(petdata)
with(petdata, table(cat, dog))  # Can compute the odds ratio
fit <- vgam(cbind((1-cat) * (1-dog), (1-cat) * dog,
                  cat * (1-dog), cat * dog) ~ s(age, df = 5),
            binom2.or(zero = 3), data = petdata, trace = TRUE)
colSums(depvar(fit))
coef(fit, matrix = TRUE)
## End(Not run)

## Not run: 
# Plot the estimated probabilities
ooo <- order(with(petdata, age))
matplot(with(petdata, age)[ooo], fitted(fit)[ooo, ], type = "l",
        xlab = "Age", ylab = "Probability", main = "Pet ownership",
        ylim = c(0, max(fitted(fit))), las = 1, lwd = 1.5)
legend("topleft", col = 1:4, lty = 1:4,
       leg = c("no cat or dog ", "dog only", "cat only", "cat and dog"),
       lwd = 1.5)
## End(Not run)
Density and random generation for a bivariate binary regression model using an odds ratio as the measure of dependency.
rbinom2.or(n, mu1,
           mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"),
           oratio = 1, exchangeable = FALSE, tol = 0.001, twoCols = TRUE,
           colnames = if (twoCols) c("y1","y2") else c("00", "01", "10", "11"),
           ErrorCheck = TRUE)
dbinom2.or(mu1,
           mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"),
           oratio = 1, exchangeable = FALSE, tol = 0.001,
           colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE)
n |
number of observations.
Same as in runif. |
mu1 , mu2
|
The marginal probabilities.
Only mu1 is needed if exchangeable = TRUE.
Values should be between 0 and 1. |
oratio |
Odds ratio. Must be numeric and positive. The default value of unity means the responses are statistically independent. |
exchangeable |
Logical. If TRUE, the two marginal probabilities are
constrained to be equal. |
twoCols |
Logical.
If TRUE, then a n x 2 matrix of 1s and 0s is returned.
If FALSE, then a n x 4 matrix of 1s and 0s is returned. |
colnames |
The |
tol |
Tolerance for testing independence. Should be some small positive numerical value. |
ErrorCheck |
Logical. Do some error checking of the input parameters? |
The function rbinom2.or
generates data coming from a
bivariate binary response model.
The data might be fitted with
the VGAM family function binom2.or
.
The function dbinom2.or
does not really compute the
density (because that does not make sense here) but rather
returns the four joint probabilities.
The function rbinom2.or
returns
either a 2 or 4 column matrix of 1s and 0s, depending on the
argument twoCols
.
The function dbinom2.or
returns
a 4 column matrix of joint probabilities; each row adds up
to unity.
T. W. Yee
nn <- 1000  # Example 1
ymat <- rbinom2.or(nn, mu1 = logitlink(1, inv = TRUE),
                   oratio = exp(2), exch = TRUE)
(mytab <- table(ymat[, 1], ymat[, 2], dnn = c("Y1", "Y2")))
(myor <- mytab["0","0"] * mytab["1","1"] / (mytab["1","0"] * mytab["0","1"]))
fit <- vglm(ymat ~ 1, binom2.or(exch = TRUE))
coef(fit, matrix = TRUE)

bdata <- data.frame(x2 = sort(runif(nn)))  # Example 2
bdata <- transform(bdata, mu1 = logitlink(-2 + 4 * x2, inverse = TRUE),
                          mu2 = logitlink(-1 + 3 * x2, inverse = TRUE))
dmat <- with(bdata, dbinom2.or(mu1 = mu1, mu2 = mu2, oratio = exp(2)))
ymat <- with(bdata, rbinom2.or(n = nn, mu1 = mu1, mu2 = mu2,
                               oratio = exp(2)))
fit2 <- vglm(ymat ~ x2, binom2.or, data = bdata)
coef(fit2, matrix = TRUE)

## Not run: 
matplot(with(bdata, x2), dmat, lty = 1:4, col = 1:4,
        main = "Joint probabilities", ylim = 0:1, type = "l",
        ylab = "Probabilities", xlab = "x2", las = 1)
legend("top", lty = 1:4, col = 1:4,
       legend = c("1 = (y1=0, y2=0)", "2 = (y1=0, y2=1)",
                  "3 = (y1=1, y2=0)", "4 = (y1=1, y2=1)"))
## End(Not run)
Fits a bivariate probit model to two binary responses. The correlation parameter rho is the measure of dependency.
binom2.rho(lmu = "probitlink", lrho = "rhobitlink", imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = "rho", exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05), nsimEIM = NULL) binom2.Rho(rho = 0, imu1 = NULL, imu2 = NULL, exchangeable = FALSE, nsimEIM = NULL)
binom2.rho(lmu = "probitlink", lrho = "rhobitlink", imu1 = NULL, imu2 = NULL, irho = NULL, imethod = 1, zero = "rho", exchangeable = FALSE, grho = seq(-0.95, 0.95, by = 0.05), nsimEIM = NULL) binom2.Rho(rho = 0, imu1 = NULL, imu2 = NULL, exchangeable = FALSE, nsimEIM = NULL)
lmu |
Link function applied to the marginal probabilities. Should be left alone. |
lrho |
Link function applied to the |
imu1 , imu2
|
Optional initial values for the two marginal probabilities. May be a vector. |
irho |
Optional initial value for |
zero |
Specifies which linear/additive predictors are modelled as
intercept-only.
A |
exchangeable |
Logical.
If |
imethod , nsimEIM , grho
|
See |
rho |
Numeric vector.
Values are recycled to the needed length,
and ought to be in range, which is (-1, 1). |
The bivariate probit model was one of the earliest regression models to handle two binary responses jointly. It has a probit link for each of the two marginal probabilities, and models the association between the responses by the correlation parameter $\rho$ of a standard bivariate normal distribution (with zero means and unit variances). One can think of the joint probabilities being $\Phi(\eta_1, \eta_2; \rho)$, where $\Phi$ is the cumulative distribution function of a standard bivariate normal distribution.

Explicitly, the default model is
$$\mathrm{probit}\, P(Y_j = 1) = \eta_j, \quad j = 1, 2,$$
for the marginals, and
$$\mathrm{rhobit}(\rho) = \eta_3.$$
The joint probability is $P(Y_1 = 1, Y_2 = 1) = \Phi(\eta_1, \eta_2; \rho)$, and from these the other three joint probabilities are easily computed. The model is fitted by maximum likelihood estimation since the full likelihood is specified. Fisher scoring is implemented.

The default models $\eta_3$ as a single parameter only, i.e., an intercept-only model for $\rho$, but this can be circumvented by setting zero = NULL in order to model $\rho$ as a function of all the explanatory variables.
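A minimal sketch of this choice (assuming the coalminers data used in the example below); the hypothetical fits fit.c and fit.v contrast the default intercept-only $\rho$ with a $\rho$ that varies with Age.

coalminers <- transform(coalminers, Age = (age - 42) / 5)
# Default: rho is intercept-only (zero = "rho")
fit.c <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age,
              binom2.rho, data = coalminers)
# zero = NULL: Age also enters eta_3, so rho varies with Age
fit.v <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age,
              binom2.rho(zero = NULL), data = coalminers)
coef(fit.c, matrix = TRUE)  # "rhobitlink(rho)" column: intercept only
coef(fit.v, matrix = TRUE)  # "rhobitlink(rho)" column: Age enters too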
The bivariate probit model should not be confused with
a bivariate logit model with a probit link (see
binom2.or
). The latter uses the odds ratio to
quantify the association. Actually, the bivariate logit model
is recommended over the bivariate probit model because the
odds ratio is a more natural way of measuring the association
between two binary responses.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
, and vgam
.
When fitted, the fitted.values
slot of the object
contains the four joint probabilities, labelled as
$(Y_1, Y_2)$ = (0,0), (0,1), (1,0), (1,1),
respectively.
See binom2.or
about the form of input the response
should have.
By default, a constant $\rho$ is fitted because zero = "rho". Set zero = NULL if you want the $\rho$ parameter to be modelled as a function of the explanatory variables. The value of $\rho$ lies in the interval $(-1, 1)$, therefore a rhobitlink link is the default.
Convergence problems can occur.
If so, assign irho
a range of
values and monitor convergence (e.g., set trace = TRUE
).
Else try imethod
.
Practical experience shows that local solutions can occur,
and that irho
needs to be quite close to the (global)
solution.
Also, imu1
and imu2
may be used.
This help file is mainly about binom2.rho()
.
binom2.Rho()
fits a bivariate probit model with
known $\rho$.
The inputted
rho
is saved in the misc
slot of
the fitted object, with rho
as the component name.
In some econometrics applications (e.g., Freedman 2010, Freedman and Sekhon 2010) one response is used as an explanatory variable, e.g., a recursive binomial probit model. Such models will not work here. Historically, the bivariate probit model was the first VGAM family function I ever wrote, based on Ashford and Sowden (1970). I don't think they ever thought of it either! Hence the criticisms raised go beyond the use of what was originally intended.
Thomas W. Yee
Ashford, J. R. and Sowden, R. R. (1970). Multi-variate probit analysis. Biometrics, 26, 535–546.
Freedman, D. A. (2010). Statistical Models and Causal Inference: a Dialogue with the Social Sciences, Cambridge: Cambridge University Press.
Freedman, D. A. and Sekhon, J. S. (2010). Endogeneity in probit response models. Political Analysis, 18, 138–150.
rbinom2.rho
,
rhobitlink
,
pbinorm
,
binom2.or
,
loglinb2
,
coalminers
,
binomialff
,
fisherzlink
.
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age,
            binom2.rho, data = coalminers, trace = TRUE)
summary(fit)
coef(fit, matrix = TRUE)
Density and random generation for a bivariate probit model. The correlation parameter rho is the measure of dependency.
rbinom2.rho(n, mu1,
            mu2 = if (exchangeable) mu1 else stop("argument 'mu2' not specified"),
            rho = 0, exchangeable = FALSE, twoCols = TRUE,
            colnames = if (twoCols) c("y1", "y2") else c("00", "01", "10", "11"),
            ErrorCheck = TRUE)
dbinom2.rho(mu1,
            mu2 = if (exchangeable) mu1 else stop("'mu2' not specified"),
            rho = 0, exchangeable = FALSE,
            colnames = c("00", "01", "10", "11"), ErrorCheck = TRUE)
n |
number of observations.
Same as in |
mu1 , mu2
|
The marginal probabilities.
Only |
rho |
The correlation parameter.
Must be numeric and lie between -1 and 1. |
exchangeable |
Logical. If |
twoCols |
Logical.
If |
colnames |
The |
ErrorCheck |
Logical. Do some error checking of the input parameters? |
The function rbinom2.rho
generates data coming from a
bivariate probit model.
The data might be fitted with the VGAM family function
binom2.rho
.
The function dbinom2.rho
does not really compute the
density (because that does not make sense here) but rather
returns the four joint probabilities.
The function rbinom2.rho
returns
either a 2 or 4 column matrix of 1s and 0s, depending on the
argument twoCols
.
The function dbinom2.rho
returns
a 4 column matrix of joint probabilities; each row adds up
to unity.
T. W. Yee
(myrho <- rhobitlink(2, inverse = TRUE))  # Example 1
nn <- 2000
ymat <- rbinom2.rho(nn, mu1 = 0.8, rho = myrho, exch = TRUE)
(mytab <- table(ymat[, 1], ymat[, 2], dnn = c("Y1", "Y2")))
fit <- vglm(ymat ~ 1, binom2.rho(exch = TRUE))
coef(fit, matrix = TRUE)

bdata <- data.frame(x2 = sort(runif(nn)))  # Example 2
bdata <- transform(bdata, mu1 = probitlink(-2 + 4*x2, inv = TRUE),
                          mu2 = probitlink(-1 + 3*x2, inv = TRUE))
dmat <- with(bdata, dbinom2.rho(mu1, mu2, myrho))
ymat <- with(bdata, rbinom2.rho(nn, mu1, mu2, myrho))
fit2 <- vglm(ymat ~ x2, binom2.rho, data = bdata)
coef(fit2, matrix = TRUE)

## Not run: 
matplot(with(bdata, x2), dmat, lty = 1:4, col = 1:4, type = "l",
        main = "Joint probabilities", ylim = 0:1, lwd = 2,
        ylab = "Probability")
legend(x = 0.25, y = 0.9, lty = 1:4, col = 1:4, lwd = 2,
       legend = c("1 = (y1=0, y2=0)", "2 = (y1=0, y2=1)",
                  "3 = (y1=1, y2=0)", "4 = (y1=1, y2=1)"))
## End(Not run)
Family function for fitting generalized linear models to binomial responses
binomialff(link = "logitlink", multiple.responses = FALSE, parallel = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE)
binomialff(link = "logitlink", multiple.responses = FALSE, parallel = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE)
link |
Link function;
see |
multiple.responses |
Multivariate response? If TRUE, then the response is interpreted as multiple independent binary responses, with one column of the response matrix per binary response. If FALSE (the default), then the response is a single binary response. |
parallel |
A logical or formula. Used only if |
zero |
An integer-valued vector specifying which linear/additive predictors
are modelled as intercepts only. The values must be from the set
{1, 2, ..., M}, where M is the number of linear/additive predictors. |
earg.link |
Details at |
bred |
Details at |
This function largely mimics binomial; however, there are some differences.
When used with cqo
and cao
, it may be
preferable to use the clogloglink
link.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as
vglm
,
vgam
,
rrvglm
,
cqo
,
and cao
.
See the above note regarding bred
.
The maximum likelihood estimate will not exist if the data is
completely separable or quasi-completely separable.
See Chapter 10 of Altman et al. (2004) for more details,
and safeBinaryRegression
and hdeff.vglm
.
Yet to do: add a sepcheck = TRUE
, say, argument to
further detect this problem and give an appropriate warning.
If multiple.responses
is FALSE
(default) then
the response can be of one
of two formats:
a factor (first level taken as failure),
or a 2-column matrix (first column = successes) of counts.
The argument weights
in the modelling function can
also be specified as any vector of positive values.
In general, 1 means success and 0 means failure
(to check, see the y
slot of the fitted object).
Note that a general vector of proportions of success is no
longer accepted.
The notation $M$ is used to denote the number of linear/additive
predictors.
If multiple.responses
is TRUE
, then the matrix response
can only be of one format: a matrix of 1's and 0's (1 = success).
Fisher scoring is used. This can sometimes fail to converge by oscillating between successive iterations (Ridout, 1990). See the example below.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Altman, M. and Gill, J. and McDonald, M. P. (2004). Numerical Issues in Statistical Computing for the Social Scientist, Hoboken, NJ, USA: Wiley-Interscience.
Ridout, M. S. (1990). Non-convergence of Fisher's method of scoring—a simple example. GLIM Newsletter, 20(6).
hdeff.vglm
,
Links
,
alogitlink
,
asinlink
,
N1binomial
,
rrvglm
,
cqo
,
cao
,
betabinomial
,
posbinomial
,
zibinomial
,
double.expbinomial
,
seq2binomial
,
amlbinomial
,
simplex
,
binomial
,
simulate.vlm
,
safeBinaryRegression,
residualsvglm
.
shunua <- hunua[sort.list(with(hunua, altitude)), ]  # Sort by altitude
fit <- vglm(agaaus ~ poly(altitude, 2),
            binomialff(link = clogloglink), data = shunua)

## Not run: 
plot(agaaus ~ jitter(altitude), shunua, ylab = "Pr(Agaaus = 1)",
     main = "Presence/absence of Agathis australis", col = 4, las = 1)
with(shunua, lines(altitude, fitted(fit), col = "orange", lwd = 2))
## End(Not run)

# Fit two species simultaneously
fit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude),
             binomialff(multiple.responses = TRUE), data = shunua)
## Not run: 
with(shunua, matplot(altitude, fitted(fit2), type = "l",
     main = "Two species response curves", las = 1))
## End(Not run)

# Shows that Fisher scoring can sometimes fail. See Ridout (1990).
ridout <- data.frame(v = c(1000, 100, 10), r = c(4, 3, 3), n = rep(5, 3))
(ridout <- transform(ridout, logv = log(v)))
# The iterations oscillate between two local solutions:
glm.fail <- glm(r / n ~ offset(logv) + 1, weight = n,
                binomial(link = 'cloglog'), ridout, trace = TRUE)
coef(glm.fail)
# vglm()'s half-stepping ensures the MLE of -5.4007 is obtained:
vglm.ok <- vglm(cbind(r, n-r) ~ offset(logv) + 1,
                binomialff(link = clogloglink), ridout, trace = TRUE)
coef(vglm.ok)

# Separable data
set.seed(123)
threshold <- 0
bdata <- data.frame(x2 = sort(rnorm(nn <- 100)))
bdata <- transform(bdata, y1 = ifelse(x2 < threshold, 0, 1))
fit <- vglm(y1 ~ x2, binomialff(bred = TRUE), data = bdata,
            criter = "coef", trace = TRUE)
coef(fit, matrix = TRUE)  # Finite!!
summary(fit)
## Not run: 
plot(depvar(fit) ~ x2, data = bdata, col = "blue", las = 1)
lines(fitted(fit) ~ x2, data = bdata, col = "orange")
abline(v = threshold, col = "gray", lty = "dashed")
## End(Not run)
Density, cumulative distribution function and random generation for the bivariate normal distribution.
dbinorm(x1, x2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1,
        cov12 = 0, log = FALSE)
pbinorm(q1, q2, mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0)
rbinorm(n,      mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0)
pnorm2(x1, x2,  mean1 = 0, mean2 = 0, var1 = 1, var2 = 1, cov12 = 0)
x1 , x2 , q1 , q2
|
vector of quantiles. |
mean1 , mean2 , var1 , var2 , cov12
|
vector of means, variances and the covariance. |
n |
number of observations.
Same as |
log |
Logical.
If |
The default arguments correspond to the standard bivariate normal distribution with correlation parameter $\rho = 0$. That is, two independent standard normal distributions. Let sd1 (say) be sqrt(var1), written $\sigma_1$, etc. Then the general formula for the correlation coefficient is $\rho = \sigma_{12} / (\sigma_1 \sigma_2)$, where $\sigma_{12}$ is argument cov12. Thus if arguments var1 and var2 are left alone then cov12 can be inputted with $\rho$.
One can think of this function as an extension of
pnorm
to two dimensions, however note
that the argument names have been changed for VGAM
0.9-1 onwards.
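As a quick sanity check of dbinorm(), the density of a standard bivariate normal at the origin has the closed form $1 / (2\pi\sqrt{1 - \rho^2})$:

rho <- 0.5
dbinorm(0, 0, cov12 = rho)       # Numerical value from dbinorm()
1 / (2 * pi * sqrt(1 - rho^2))   # Closed form; should agree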
dbinorm
gives the density,
pbinorm
gives the cumulative distribution function,
rbinorm
generates random deviates (an $n$ by 2 matrix).
Being based on an approximation, the results of pbinorm()
may be negative!
Also,
pnorm2()
should be withdrawn soon;
use pbinorm()
instead because it is identical.
For rbinorm()
,
if the $i$th variance-covariance matrix is not
positive-definite then the
$i$th row is all
NA
s.
pbinorm()
is
based on Donnelly (1973),
the code was translated from FORTRAN to ratfor using struct, and
then from ratfor to C manually.
The function was originally called bivnor
, and TWY only
wrote a wrapper function.
Donnelly, T. G. (1973). Algorithm 462: Bivariate Normal Distribution. Communications of the ACM, 16, 638.
yvec <- c(-5, -1.96, 0, 1.96, 5)
ymat <- expand.grid(yvec, yvec)
cbind(ymat, pbinorm(ymat[, 1], ymat[, 2]))

## Not run: 
rhovec <- seq(-0.95, 0.95, by = 0.01)
plot(rhovec, pbinorm(0, 0, cov12 = rhovec),
     xlab = expression(rho), lwd = 2,
     type = "l", col = "blue", las = 1)
abline(v = 0, h = 0.25, col = "gray", lty = "dashed")
## End(Not run)
Maximum likelihood estimation of the five parameters of a bivariate normal distribution.
binormal(lmean1 = "identitylink", lmean2 = "identitylink", lsd1 = "loglink", lsd2 = "loglink", lrho = "rhobitlink", imean1 = NULL, imean2 = NULL, isd1 = NULL, isd2 = NULL, irho = NULL, imethod = 1, eq.mean = FALSE, eq.sd = FALSE, zero = c("sd", "rho"), rho.arg = NA)
binormal(lmean1 = "identitylink", lmean2 = "identitylink", lsd1 = "loglink", lsd2 = "loglink", lrho = "rhobitlink", imean1 = NULL, imean2 = NULL, isd1 = NULL, isd2 = NULL, irho = NULL, imethod = 1, eq.mean = FALSE, eq.sd = FALSE, zero = c("sd", "rho"), rho.arg = NA)
lmean1 , lmean2 , lsd1 , lsd2 , lrho
|
Link functions applied to the means, standard deviations and
|
imean1 , imean2 , isd1 , isd2 , irho , imethod , zero
|
See |
eq.mean , eq.sd
|
Logical or formula. Constrains the means or the standard deviations to be equal. |
rho.arg |
If |
For the bivariate normal distribution,
this fits a linear model (LM) to the means, and
by default,
the other parameters are intercept-only.
The response should be a two-column matrix.
The correlation parameter is $\rho$, which lies between $-1$ and $1$ (thus the rhobitlink link is a reasonable choice).
The fitted means are returned as the fitted
values, which is in
the form of a two-column matrix.
Fisher scoring is implemented.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This function may be renamed to
normal2()
or something like that at
a later date.
If both equal means and equal standard
deviations are desired then use something
like
constraints = list("(Intercept)" =
matrix(c(1,1,0,0,0, 0,0,1,1,0 ,0,0,0,0,1), 5, 3))
and maybe
zero = NULL
etc.
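A hedged sketch of this suggestion (assuming a two-column response such as the bdata constructed in the example below): the constraint matrix ties the two means together (first column), the two standard deviations together (second column), and leaves $\rho$ free (third column).

cmat <- list("(Intercept)" =
  matrix(c(1,1,0,0,0,  0,0,1,1,0,  0,0,0,0,1), 5, 3))
fit.eq <- vglm(cbind(y1, y2) ~ 1, binormal(zero = NULL),
               data = bdata, constraints = cmat)
constraints(fit.eq)  # Check the constraint matrices were applied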
T. W. Yee
uninormal
,
trinormal
,
pbinorm
,
bistudentt
,
rhobitlink
.
set.seed(123); nn <- 1000
bdata <- data.frame(x2 = runif(nn), x3 = runif(nn))
bdata <- transform(bdata,
                   y1 = rnorm(nn, 1 + 2 * x2),
                   y2 = rnorm(nn, 3 + 4 * x2))
fit1 <- vglm(cbind(y1, y2) ~ x2,
             binormal(eq.sd = TRUE), bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
constraints(fit1)
summary(fit1)

# Estimated P(Y1 <= y1, Y2 <= y2) under the fitted model
var1  <- loglink(2 * predict(fit1)[, "loglink(sd1)"], inv = TRUE)
var2  <- loglink(2 * predict(fit1)[, "loglink(sd2)"], inv = TRUE)
cov12 <- rhobitlink(predict(fit1)[, "rhobitlink(rho)"], inv = TRUE)
head(with(bdata, pbinorm(y1, y2,
                         mean1 = predict(fit1)[, "mean1"],
                         mean2 = predict(fit1)[, "mean2"],
                         var1 = var1, var2 = var2, cov12 = cov12)))
Estimate the correlation parameter of the (bivariate) Gaussian copula distribution by maximum likelihood estimation.
binormalcop(lrho = "rhobitlink", irho = NULL, imethod = 1, parallel = FALSE, zero = NULL)
binormalcop(lrho = "rhobitlink", irho = NULL, imethod = 1, parallel = FALSE, zero = NULL)
lrho , irho , imethod
|
Details at |
parallel , zero
|
Details at |
The cumulative distribution function is
$$P(Y_1 \le y_1, Y_2 \le y_2) = \Phi_2(\Phi^{-1}(y_1), \Phi^{-1}(y_2); \rho)$$
for $-1 < \rho < 1$, where $\Phi_2$ is the cumulative distribution function of a standard bivariate normal (see pbinorm), and $\Phi$ is the cumulative distribution function of a standard univariate normal (see pnorm).

The support of the function is the interior of the unit square; however, values of 0 and/or 1 are not allowed. The marginal distributions are the standard uniform distributions. When $\rho = 0$ the random variables are independent.
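A minimal numerical check of the copula formula stated above, using pbinorm():

rho <- 0.6; y1 <- 0.3; y2 <- 0.8
pbinormcop(y1, y2, rho = rho)               # Copula CDF
pbinorm(qnorm(y1), qnorm(y2), cov12 = rho)  # Should agree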
This VGAM family function can handle multiple responses, for example, a six-column matrix where the first two columns are the first of three responses, the next two columns are the second response, and so on.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response matrix must have a multiple of two columns. Currently, the fitted value is a matrix with the same number of columns and values equal to 0.5. This is because each marginal distribution corresponds to a standard uniform distribution.
This VGAM family function is fragile;
each response must be in the interior of the unit square.
Setting crit = "coef"
is sometimes a
good idea because
inaccuracies in pbinorm
might mean
unnecessary half-stepping will occur near the solution.
T. W. Yee
Schepsmeier, U. and Stober, J. (2014). Derivatives and Fisher information of bivariate copulas. Statistical Papers 55, 525–542.
rbinormcop
,
rhobitlink
,
pnorm
,
kendall.tau
.
nn <- 1000
ymat <- rbinormcop(nn, rho = rhobitlink(-0.9, inverse = TRUE))
bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2],
                    y3 = ymat[, 1], y4 = ymat[, 2],
                    x2 = runif(nn))
summary(bdata)
## Not run: plot(ymat, col = "blue")
fit1 <-  # 2 responses, e.g., (y1,y2) is the 1st
  vglm(cbind(y1, y2, y3, y4) ~ 1, fam = binormalcop,
       crit = "coef",  # Sometimes a good idea
       data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
summary(fit1)

# Another example; rho is a linear function of x2
bdata <- transform(bdata, rho = -0.5 + x2)
ymat <- rbinormcop(n = nn, rho = with(bdata, rho))
bdata <- transform(bdata, y5 = ymat[, 1], y6 = ymat[, 2])
fit2 <- vgam(cbind(y5, y6) ~ s(x2), data = bdata,
             binormalcop(lrho = "identitylink"), trace = TRUE)
## Not run: plot(fit2, lcol = "blue", scol = "orange", se = TRUE)
Density, distribution function, and random generation for the (one parameter) bivariate Gaussian copula distribution.
dbinormcop(x1, x2, rho = 0, log = FALSE)
pbinormcop(q1, q2, rho = 0)
rbinormcop(n, rho = 0)
x1 , x2 , q1 , q2
|
vector of quantiles.
The |
n |
number of observations.
Same as |
rho |
the correlation parameter.
Should be in the interval (-1, 1). |
log |
Logical.
If |
See binormalcop
, the VGAM
family functions for estimating the
parameter by maximum likelihood estimation,
for the formula of the
cumulative distribution function and other details.
dbinormcop
gives the density,
pbinormcop
gives the distribution function, and
rbinormcop
generates random deviates (a two-column matrix).
Yet to do: allow x1
and/or x2
to have values 1,
and to allow any values for x1
and/or x2
to be
outside the unit square.
T. W. Yee
## Not run: 
edge <- 0.01  # A small positive value
N <- 101; x <- seq(edge, 1.0 - edge, len = N); Rho <- 0.7
ox <- expand.grid(x, x)
zedd <- dbinormcop(ox[, 1], ox[, 2], rho = Rho, log = TRUE)
contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5)
zedd <- pbinormcop(ox[, 1], ox[, 2], rho = Rho)
contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5)
## End(Not run)
Density, distribution function, and random generation for the (one parameter) bivariate Plackett copula.
dbiplackcop(x1, x2, oratio, log = FALSE)
pbiplackcop(q1, q2, oratio)
rbiplackcop(n, oratio)
x1 , x2 , q1 , q2
|
vector of quantiles. |
n |
number of observations.
Same as in |
oratio |
the positive odds ratio |
log |
Logical.
If |
See biplackettcop
, the VGAM
family functions for estimating the
parameter by maximum likelihood estimation, for the formula of
the cumulative distribution function and other details.
dbiplackcop
gives the density,
pbiplackcop
gives the distribution function, and
rbiplackcop
generates random deviates (a two-column
matrix).
T. W. Yee
Mardia, K. V. (1967). Some contributions to contingency-type distributions. Biometrika, 54, 235–249.
## Not run: 
N <- 101; oratio <- exp(1)
x <- seq(0.0, 1.0, len = N)
ox <- expand.grid(x, x)
zedd <- dbiplackcop(ox[, 1], ox[, 2], oratio = oratio)
contour(x, x, matrix(zedd, N, N), col = "blue")
zedd <- pbiplackcop(ox[, 1], ox[, 2], oratio = oratio)
contour(x, x, matrix(zedd, N, N), col = "blue")
plot(rr <- rbiplackcop(n = 3000, oratio = oratio))
par(mfrow = c(1, 2))
hist(rr[, 1])  # Should be uniform
hist(rr[, 2])  # Should be uniform
## End(Not run)
Estimate the association parameter of Plackett's bivariate distribution (copula) by maximum likelihood estimation.
biplackettcop(link = "loglink", ioratio = NULL, imethod = 1, nsimEIM = 200)
biplackettcop(link = "loglink", ioratio = NULL, imethod = 1, nsimEIM = 200)
link |
Link function applied to the (positive) odds ratio |
ioratio |
Numeric. Optional initial value for |
imethod , nsimEIM
|
The defining equation is
$$\psi = \frac{H \, (1 - y_1 - y_2 + H)}{(y_1 - H)(y_2 - H)}$$
where $P(Y_1 \le y_1, Y_2 \le y_2) = H_{\psi}(y_1, y_2)$ is the cumulative distribution function. The density function is
$$h_{\psi}(y_1, y_2) = \frac{\psi \, [1 + (\psi - 1)(y_1 + y_2 - 2 y_1 y_2)]}{\{[1 + (\psi - 1)(y_1 + y_2)]^2 - 4 \psi (\psi - 1) y_1 y_2\}^{3/2}}$$
for $\psi > 0$. Some writers call $\psi$ the cross product ratio but it is called the odds ratio here. The support of the function is the unit square. The marginal distributions here are the standard uniform although it is commonly generalized to other distributions.

If $\psi = 1$ then $H_{\psi}(y_1, y_2) = y_1 y_2$, i.e., independence. As the odds ratio tends to infinity one has $H_{\psi}(y_1, y_2) \to \min(y_1, y_2)$. As the odds ratio tends to 0 one has $H_{\psi}(y_1, y_2) \to \max(0, y_1 + y_2 - 1)$.
Fisher scoring is implemented using rbiplackcop
.
Convergence is often quite slow.
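A hedged numerical check of the defining equation: solving the quadratic in $H$ gives a closed form for the CDF, which should agree with pbiplackcop() (assuming oratio is not 1).

psi <- exp(1); y1 <- 0.4; y2 <- 0.7
A <- 1 + (psi - 1) * (y1 + y2)
(A - sqrt(A^2 - 4 * psi * (psi - 1) * y1 * y2)) / (2 * (psi - 1))
pbiplackcop(y1, y2, oratio = psi)  # Should agree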
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The response must be a two-column matrix. Currently, the fitted value is a 2-column matrix with 0.5 values because the marginal distributions correspond to a standard uniform distribution.
T. W. Yee
Plackett, R. L. (1965). A class of bivariate distributions. Journal of the American Statistical Association, 60, 516–522.
## Not run: 
ymat <- rbiplackcop(n = 2000, oratio = exp(2))
plot(ymat, col = "blue")
fit <- vglm(ymat ~ 1, fam = biplackettcop, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
vcov(fit)
head(fitted(fit))
summary(fit)
## End(Not run)
biplot is a generic function applied to RR-VGLMs, QRR-VGLMs, etc. It applies only to rank-1 and rank-2 models of these classes. For RR-VGLMs it plots the second latent variable scores against the first latent variable scores.
The object from which the latent variables are extracted and/or plotted.
See lvplot
which is very much related to biplots.
Estimates the shape and scale parameters of the Birnbaum-Saunders distribution by maximum likelihood estimation.
bisa(lscale = "loglink", lshape = "loglink", iscale = 1, ishape = NULL, imethod = 1, zero = "shape", nowarning = FALSE)
bisa(lscale = "loglink", lshape = "loglink", iscale = 1, ishape = NULL, imethod = 1, zero = "shape", nowarning = FALSE)
nowarning |
Logical. Suppress a warning? Ignored for VGAM 0.9-7 and higher. |
lscale , lshape
|
Parameter link functions applied to the shape and
scale parameters
( |
iscale , ishape
|
Initial values for |
imethod |
An integer with value |
zero |
Specifies which linear/additive predictor is
modelled as intercept-only.
If used, choose one value from the set {1,2}.
See |
The (two-parameter) Birnbaum-Saunders distribution has a cumulative distribution function that can be written as
$$F(y; a, b) = \Phi\!\left(\xi(y/b)/a\right)$$
where $\Phi(\cdot)$ is the cumulative distribution function of a standard normal (see pnorm), $\xi(t) = \sqrt{t} - 1/\sqrt{t}$, $y > 0$, $a$ is the shape parameter, and $b$ is the scale parameter. The mean of $Y$ (which is the fitted value) is $b(1 + a^2/2)$ and the variance is $(ab)^2(1 + 5a^2/4)$. By default, $\eta_1 = \log(b)$ and $\eta_2 = \log(a)$ for this family function.

Note that $a$ and $b$ are orthogonal, i.e., the Fisher information matrix is diagonal. This family function implements Fisher scoring, and it is unnecessary to compute any integrals numerically.
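A minimal simulation sketch checking the fitted-value formula $b(1 + a^2/2)$ stated above:

set.seed(1)
a <- 0.5; b <- 2  # Shape and scale
y <- rbisa(1e5, scale = b, shape = a)
mean(y)            # Empirical mean
b * (1 + a^2 / 2)  # Theoretical mean; should be close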
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
T. W. Yee
Lemonte, A. J. and Cribari-Neto, F. and Vasconcellos, K. L. P. (2007). Improved statistical inference for the two-parameter Birnbaum-Saunders distribution. Computational Statistics & Data Analysis, 51, 4656–4681.
Birnbaum, Z. W. and Saunders, S. C. (1969). A new family of life distributions. Journal of Applied Probability, 6, 319–327.
Birnbaum, Z. W. and Saunders, S. C. (1969). Estimation for a family of life distributions with applications to fatigue. Journal of Applied Probability, 6, 328–347.
Engelhardt, M. and Bain, L. J. and Wright, F. T. (1981). Inferences on the parameters of the Birnbaum-Saunders fatigue life distribution based on maximum likelihood estimation. Technometrics, 23, 251–256.
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions, 2nd edition, Volume 2, New York: Wiley.
pbisa
,
inv.gaussianff
,
CommonVGAMffArguments
.
bdata1 <- data.frame(x2 = runif(nn <- 1000))
bdata1 <- transform(bdata1, shape = exp(-0.5 + x2), scale = exp(1.5))
bdata1 <- transform(bdata1, y = rbisa(nn, scale, shape))
fit1 <- vglm(y ~ x2, bisa(zero = 1), data = bdata1, trace = TRUE)
coef(fit1, matrix = TRUE)

## Not run: 
bdata2 <- data.frame(shape = exp(-0.5), scale = exp(0.5))
bdata2 <- transform(bdata2, y = rbisa(nn, scale, shape))
fit <- vglm(y ~ 1, bisa, data = bdata2, trace = TRUE)
with(bdata2, hist(y, prob = TRUE, ylim = c(0, 0.5),
                  col = "lightblue"))
coef(fit, matrix = TRUE)
with(bdata2, mean(y))
head(fitted(fit))
x <- with(bdata2, seq(0, max(y), len = 200))
lines(dbisa(x, Coef(fit)[1], Coef(fit)[2]) ~ x, data = bdata2,
      col = "orange", lwd = 2)
## End(Not run)
Density, distribution function, and random generation for the Birnbaum-Saunders distribution.
dbisa(x, scale = 1, shape, log = FALSE)
pbisa(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qbisa(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
rbisa(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
Same as in |
scale , shape
|
the (positive) scale and shape parameters. |
log |
Logical.
If |
lower.tail , log.p
|
The Birnbaum-Saunders distribution
is a distribution which is used in survival analysis.
See bisa
, the VGAM family function
for estimating the parameters,
for more details.
dbisa
gives the density,
pbisa
gives the distribution function,
qbisa
gives the quantile function, and
rbisa
generates random deviates.
T. W. Yee and Kai Huang
bisa
.
## Not run: 
x <- seq(0, 6, len = 400)
plot(x, dbisa(x, shape = 1), type = "l", col = "blue",
     ylab = "Density", lwd = 2, ylim = c(0, 1.3), lty = 3,
     main = "X ~ Birnbaum-Saunders(shape, scale = 1)")
lines(x, dbisa(x, shape = 2), col = "orange", lty = 2, lwd = 2)
lines(x, dbisa(x, shape = 0.5), col = "green", lty = 1, lwd = 2)
legend(x = 3, y = 0.9, legend = paste("shape = ", c(0.5, 1, 2)),
       col = c("green", "blue", "orange"), lty = 1:3, lwd = 2)

shape <- 1; x <- seq(0.0, 4, len = 401)
plot(x, dbisa(x, shape = shape), type = "l", col = "blue",
     main = "Blue is density, orange is the CDF", las = 1,
     sub = "Red lines are the 10,20,...,90 percentiles",
     ylab = "", ylim = 0:1)
abline(h = 0, col = "blue", lty = 2)
lines(x, pbisa(x, shape = shape), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qbisa(probs, shape = shape)
lines(Q, dbisa(Q, shape = shape), col = "red", lty = 3, type = "h")
pbisa(Q, shape = shape) - probs  # Should be all zero
abline(h = probs, col = "red", lty = 3)
lines(Q, pbisa(Q, shape = shape), col = "red", lty = 3, type = "h")
## End(Not run)
Estimate the degrees of freedom and correlation parameters of the (bivariate) Student-t distribution by maximum likelihood estimation.
bistudentt(ldf = "logloglink", lrho = "rhobitlink", idf = NULL, irho = NULL, imethod = 1, parallel = FALSE, zero = "rho")
bistudentt(ldf = "logloglink", lrho = "rhobitlink", idf = NULL, irho = NULL, imethod = 1, parallel = FALSE, zero = "rho")
ldf , lrho , idf , irho , imethod
|
Details at |
parallel , zero
|
Details at |
The density function is
$$f(y_1, y_2; \nu, \rho) = \frac{1}{2\pi\sqrt{1 - \rho^2}} \left[1 + \frac{y_1^2 - 2\rho y_1 y_2 + y_2^2}{\nu (1 - \rho^2)}\right]^{-(\nu + 2)/2}$$
for $-1 < \rho < 1$, real $y_1$ and $y_2$, and degrees of freedom $\nu > 0$.
This VGAM family function can handle multiple responses, for example, a six-column matrix where the first two columns are the first of three responses, the next two columns are the second response, and so on.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The working weight matrices have not been fully checked.
The response matrix must have a multiple of two columns. Currently, the fitted value is a matrix with the same number of columns and values equal to 0.0.
T. W. Yee, with help from Thibault Vatter.
Schepsmeier, U. and Stober, J. (2014). Derivatives and Fisher information of bivariate copulas. Statistical Papers 55, 525–542.
nn <- 1000
mydof <- logloglink(1, inverse = TRUE)
ymat <- cbind(rt(nn, df = mydof), rt(nn, df = mydof))
bdata <- data.frame(y1 = ymat[, 1], y2 = ymat[, 2],
                    y3 = ymat[, 1], y4 = ymat[, 2],
                    x2 = runif(nn))
summary(bdata)
## Not run: plot(ymat, col = "blue")
fit1 <-  # 2 responses, e.g., (y1,y2) is the 1st
  vglm(cbind(y1, y2, y3, y4) ~ 1, bistudentt,
       # crit = "coef",  # Sometimes a good idea
       data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
summary(fit1)
Density for the bivariate Student-t distribution.
dbistudentt(x1, x2, df, rho = 0, log = FALSE)
x1 , x2
|
vector of quantiles. |
df , rho
|
vector of degrees of freedom and correlation parameter.
For |
log |
Logical.
If |
One can think of this function as an extension of
dt
to two dimensions.
See bistudentt
for more information.
dbistudentt
gives the density.
bistudentt
,
dt
.
## Not run: 
N <- 101; x <- seq(-4, 4, len = N); Rho <- 0.7
mydf <- 10; ox <- expand.grid(x, x)
zedd <- dbistudentt(ox[, 1], ox[, 2], df = mydf, rho = Rho,
                    log = TRUE)
contour(x, x, matrix(zedd, N, N), col = "blue", labcex = 1.5)
## End(Not run)
The body mass indexes and ages from an approximate random sample of 700 New Zealand adults.
data(bmi.nz)
A data frame with 700 observations on the following 2 variables.
a numeric vector; their age (years).
a numeric vector; their body mass index, which is
their weight divided by the square of their height
(kg/m^2).
They are a random sample from the Fletcher Challenge/Auckland Heart and Health survey conducted in the early 1990s.
There are some outliers in the data set.
A variable gender
would be useful, and may be added later.
Formerly the Clinical Trials Research Unit, University of Auckland, New Zealand.
MacMahon, S., Norton, R., Jackson, R., Mackie, M. J., Cheng, A., Vander Hoorn, S., Milne, A., McCulloch, A. (1995) Fletcher Challenge-University of Auckland Heart & Health Study: design and baseline findings. New Zealand Medical Journal, 108, 499–502.
## Not run: 
with(bmi.nz, plot(age, BMI, col = "blue"))
fit <- vgam(BMI ~ s(age, df = c(2, 4, 2)), lms.yjn,
            data = bmi.nz, trace = TRUE)
qtplot(fit, pcol = "blue", tcol = "brown", lcol = "brown")
## End(Not run)
Estimates the parameter of a Borel-Tanner distribution by maximum likelihood estimation.
borel.tanner(Qsize = 1, link = "logitlink", imethod = 1)
Qsize |
A positive integer.
It is called |
link |
Link function for the parameter;
see |
imethod |
See |
The Borel-Tanner distribution (Tanner, 1953) describes the distribution of the total number of customers served before a queue vanishes given a single queue with random arrival times of customers (at a constant rate $r$ per unit time, and each customer taking a constant time $b$ to be served). Initially the queue has $Q$ people and the first one starts to be served. The two parameters appear in the density only in the form of the product $rb$, therefore we use $a = rb$, say, to denote the single parameter to be estimated. The density function is
$$f(y; a) = \frac{Q}{y} \, \frac{e^{-ay} (ay)^{y - Q}}{(y - Q)!}$$
where $y = Q, Q+1, Q+2, \ldots$. The case $Q = 1$ corresponds to the Borel distribution (Borel, 1942). For the $Q = 1$ case it is necessary that $0 < a < 1$ for the distribution to be proper. The Borel distribution is a basic Lagrangian distribution of the first kind. The Borel-Tanner distribution is a $Q$-fold convolution of the Borel distribution.

The mean is $Q/(1 - a)$ (returned as the fitted values) and the variance is $Qa/(1 - a)^3$. The distribution has a very long tail unless $a$ is small. Fisher scoring is implemented.
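A minimal simulation check of the mean $Q/(1 - a)$ using rbort():

set.seed(1)
Qsize <- 2; a <- 0.3
y <- rbort(10^4, Qsize = Qsize, a = a)
mean(y)          # Empirical mean
Qsize / (1 - a)  # Theoretical mean; should be close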
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
T. W. Yee
Tanner, J. C. (1953). A problem of interference between two queues. Biometrika, 40, 58–69.
Borel, E. (1942). Sur l'emploi du theoreme de Bernoulli pour faciliter le calcul d'une infinite de coefficients. Application au probleme de l'attente a un guichet. Comptes Rendus, Academie des Sciences, Paris, Series A, 214, 452–456.
Johnson N. L., Kemp, A. W. and Kotz S. (2005). Univariate Discrete Distributions, 3rd edition, p.328. Hoboken, New Jersey: Wiley.
Consul, P. C. and Famoye, F. (2006). Lagrangian Probability Distributions, Boston, MA, USA: Birkhauser.
bdata <- data.frame(y = rbort(n <- 200))
fit <- vglm(y ~ 1, borel.tanner, bdata, trace = TRUE, crit = "c")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density and random generation for the Borel-Tanner distribution.
dbort(x, Qsize = 1, a = 0.5, log = FALSE)
rbort(n, Qsize = 1, a = 0.5)
x |
vector of quantiles. |
n |
number of observations. Must be a positive integer of length 1. |
Qsize , a
|
See |
log |
Logical.
If |
See borel.tanner
, the VGAM family function
for estimating the parameter,
for the formula of the probability density function and other
details.
dbort
gives the density,
rbort
generates random deviates.
Looping is used for rbort
, therefore
values of a
close to 1 will result in long (or infinite!)
computational times.
The default value of a
is subjective.
T. W. Yee
## Not run: 
qsize <- 1; a <- 0.5; x <- qsize:(qsize + 10)
plot(x, dbort(x, qsize, a), type = "h", las = 1, col = "blue",
     ylab = paste("fbort(qsize=", qsize, ", a=", a, ")"),
     log = "y", main = "Borel-Tanner density function")
## End(Not run)
Fits a Bradley Terry model (intercept-only model) by maximum likelihood estimation.
brat(refgp = "last", refvalue = 1, ialpha = 1)
brat(refgp = "last", refvalue = 1, ialpha = 1)
refgp |
Integer whose value must be from the set
{1,..., |
refvalue |
Numeric. A positive value for the reference group. |
ialpha |
Initial values for the |
The Bradley Terry model involves $M + 1$ competitors who either win or lose against each other (no draws/ties allowed in this implementation; see bratt if there are ties). The probability that Competitor $i$ beats Competitor $j$ is $\alpha_i / (\alpha_i + \alpha_j)$, where all the $\alpha$s are positive. Loosely, the $\alpha$s can be thought of as the competitors' ‘abilities’. For identifiability, one of the $\alpha_i$ is set to a known value refvalue, e.g., 1. By default, this function chooses the last competitor to have this reference value.

The data can be represented in the form of a $M + 1$ by $M + 1$ matrix of counts, where winners are the rows and losers are the columns. However, this is not the way the data should be inputted (see below).

Excluding the reference value/group, this function chooses $\log(\alpha_j)$ as the $M$ linear predictors. The log link ensures that the $\alpha$s are positive.
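A minimal sketch of the win probabilities for some hypothetical abilities alpha:

alpha <- c(A = 2, B = 1, C = 0.5)  # Hypothetical abilities
# Row i beats column j with probability alpha_i / (alpha_i + alpha_j);
# the diagonal is not meaningful.
outer(alpha, alpha, function(ai, aj) ai / (ai + aj))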
The Bradley Terry model can be fitted by logistic regression, but this approach is not taken here. The Bradley Terry model can be fitted with covariates, e.g., a home advantage variable, but unfortunately, this lies outside the VGLM theoretical framework and therefore cannot be handled with this code.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
.
Presently, the residuals are wrong, and the prior weights
are not handled correctly. Ideally, the total number of
counts should be the prior weights, after the response has
been converted to proportions. This would make it similar
to family functions such as multinomial
and binomialff
.
The function Brat is useful for coercing a $M + 1$ by $M + 1$ matrix of counts into a one-row matrix suitable for brat. Diagonal elements are skipped, and the usual S order of c(a.matrix) of elements is used. There should be no missing values apart from the diagonal elements of the square matrix. The matrix should have winners as the rows, and losers as the columns. In general, the response should be a 1-row matrix with $M(M + 1)$ columns.
Only an intercept model is recommended with brat
.
It doesn't make sense really to include covariates because
of the limited VGLM framework.
Notationally, note that the VGAM family function brat has $M + 1$ contestants, while bratt has $M$ contestants.
T. W. Yee
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
Stigler, S. (1994). Citation patterns in the journals of statistics and probability. Statistical Science, 9, 94–108.
The BradleyTerry2 package has more comprehensive capabilities than this function.
bratt
,
Brat
,
multinomial
,
binomialff
.
# Citation statistics: being cited is a 'win'; citing is a 'loss'
journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B")
mat <- matrix(c( NA, 33, 320, 284,
                730, NA, 813, 276,
                498, 68,  NA, 325,
                221, 17, 142,  NA), 4, 4)
dimnames(mat) <- list(winner = journal, loser = journal)
fit <- vglm(Brat(mat) ~ 1, brat(refgp = 1), trace = TRUE)
fit <- vglm(Brat(mat) ~ 1, brat(refgp = 1), trace = TRUE,
            crit = "coef")
summary(fit)
c(0, coef(fit))  # Log-abilities (in order of "journal")
c(1, Coef(fit))  # Abilities (in order of "journal")
fitted(fit)      # Probabilities of winning in awkward form
(check <- InverseBrat(fitted(fit)))  # Probabilities of winning
check + t(check)  # Should be 1's in the off-diagonals
Takes in a square matrix of counts and outputs
them in a form that is accessible to the brat
and bratt
family functions.
Brat(mat, ties = 0 * mat, string = c(">", "=="), whitespace = FALSE)
mat |
Matrix of counts, which is considered |
ties |
Matrix of counts.
This should be the same dimension as |
string |
Character.
The matrices are labelled with the first value of the
descriptor, e.g., |
whitespace |
Logical. If |
In the VGAM package it is necessary for each matrix to be represented as a single row of data by brat and bratt. Hence the non-diagonal elements of the $M + 1$ by $M + 1$ matrix are concatenated into $M(M + 1)$ values (no ties), while if there are ties, the non-diagonal elements of the $M$ by $M$ matrix, together with the ties, are concatenated into $3M(M - 1)/2$ values.

A matrix with 1 row and either $M(M + 1)$ or $3M(M - 1)/2$ columns.
This is a data preprocessing function for
brat
and bratt
.
Yet to do: merge InverseBrat
into brat
.
T. W. Yee
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
journal <- c("Biometrika", "Comm Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) Brat(mat) # Less readable Brat(mat, whitespace = TRUE) # More readable vglm(Brat(mat, whitespace = TRUE) ~ 1, brat, trace = TRUE)
journal <- c("Biometrika", "Comm Statist", "JASA", "JRSS-B") mat <- matrix(c( NA, 33, 320, 284, 730, NA, 813, 276, 498, 68, NA, 325, 221, 17, 142, NA), 4, 4) dimnames(mat) <- list(winner = journal, loser = journal) Brat(mat) # Less readable Brat(mat, whitespace = TRUE) # More readable vglm(Brat(mat, whitespace = TRUE) ~ 1, brat, trace = TRUE)
Fits a Bradley Terry model with ties (intercept-only model) by maximum likelihood estimation.
bratt(refgp = "last", refvalue = 1, ialpha = 1, i0 = 0.01)
bratt(refgp = "last", refvalue = 1, ialpha = 1, i0 = 0.01)
refgp |
Integer whose value must be from the set {1,..., |
refvalue |
Numeric. A positive value for the reference group. |
ialpha |
Initial values for the |
i0 |
Initial value for |
There are several models that extend the ordinary Bradley Terry model to handle ties. This family function implements one of these models. It involves $M$ competitors who either win or lose or tie against each other. (If there are no draws/ties then use brat.) The probability that Competitor $i$ beats Competitor $j$ is $\alpha_i / (\alpha_i + \alpha_j + \alpha_0)$, where all the $\alpha$s are positive. The probability that Competitor $i$ ties with Competitor $j$ is $\alpha_0 / (\alpha_i + \alpha_j + \alpha_0)$. Loosely, the $\alpha$s can be thought of as the competitors' ‘abilities’, and $\alpha_0$ is an added parameter to model ties. For identifiability, one of the $\alpha_i$ is set to a known value refvalue, e.g., 1. By default, this function chooses the last competitor to have this reference value.

The data can be represented in the form of a $M$ by $M$ matrix of counts, where winners are the rows and losers are the columns. However, this is not the way the data should be inputted (see below).

Excluding the reference value/group, this function chooses $\log(\alpha_j)$ as the first $M - 1$ linear predictors. The log link ensures that the $\alpha$s are positive. The last linear predictor is $\log(\alpha_0)$.
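A minimal sketch of the win and tie probabilities for hypothetical abilities alpha and tie parameter alpha0:

alpha <- c(2, 1, 0.5); alpha0 <- 0.4  # Hypothetical values
i <- 1; j <- 2
alpha[i] / (alpha[i] + alpha[j] + alpha0)  # P(Competitor i beats j)
alpha0   / (alpha[i] + alpha[j] + alpha0)  # P(i ties with j)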
The Bradley Terry model can be fitted with covariates, e.g., a home advantage variable, but unfortunately, this lies outside the VGLM theoretical framework and therefore cannot be handled with this code.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
.
The function Brat is useful for coercing a $M$ by $M$ matrix of counts into a one-row matrix suitable for bratt. Diagonal elements are skipped, and the usual S order of c(a.matrix) of elements is used. There should be no missing values apart from the diagonal elements of the square matrix. The matrix should have winners as the rows, and losers as the columns. In general, the response should be a 1-row matrix with $3M(M - 1)/2$ columns.
Also, a symmetric matrix of ties should be passed into
Brat
. The diagonal of this matrix should
be all NA
s.
Only an intercept model is recommended with bratt
.
It doesn't make sense really to include covariates because
of the limited VGLM framework.
Notationally, note that the VGAM family function brat has $M + 1$ contestants, while bratt has $M$ contestants.
T. W. Yee
Torsney, B. (2004). Fitting Bradley Terry models using a multiplicative algorithm. In: Antoch, J. (ed.) Proceedings in Computational Statistics COMPSTAT 2004, Physica-Verlag: Heidelberg. Pages 513–526.
brat
,
Brat
,
binomialff
.
# Citation statistics: being cited is a 'win'; citing is a 'loss'
journal <- c("Biometrika", "Comm.Statist", "JASA", "JRSS-B")
mat <- matrix(c( NA, 33, 320, 284,
                730, NA, 813, 276,
                498, 68,  NA, 325,
                221, 17, 142,  NA), 4, 4)
dimnames(mat) <- list(winner = journal, loser = journal)
# Add some ties. This is fictional data.
ties <- 5 + 0 * mat
ties[2, 1] <- ties[1, 2] <- 9

# Now fit the model
fit <- vglm(Brat(mat, ties) ~ 1, bratt(refgp = 1),
            trace = TRUE, crit = "coef")
summary(fit)
c(0, coef(fit))   # Log-abilities (last is log(alpha0))
c(1, Coef(fit))   # Abilities (last is alpha0)
fit@misc$alpha    # alpha_1, ..., alpha_M
fit@misc$alpha0   # alpha_0
fitted(fit)  # Probabilities of winning and tying, in awkward form
predict(fit)
(check <- InverseBrat(fitted(fit)))    # Probabilities of winning
qprob <- attr(fitted(fit), "probtie")  # Probabilities of a tie
qprobmat <- InverseBrat(c(qprob), NCo = nrow(ties))  # Pr(tie)
check + t(check) + qprobmat  # Should be 1s in the off-diagonals
Counts of western spruce budworm (Choristoneura freemani) across seven developmental stages (five larval instars, pupae, and adults) on 12 sampling occasions.
data(budworm)
A data frame with the following variables.
Degree days.
Sum of stages 1–7.
Successive stages.
Successive stages.
This data concerns the development of a defoliating
moth widespread in western North America
(i.e., north of Mexico).
According to Boersch-Supan (2021),
the insect passes through successive stages,
delimited by
moults.
The data was originally used in a 1986 publication
but has been corrected for two sampling occasions;
the data appears in Candy (1990) and
was analyzed in Boersch-Supan (2021).
See the latter for more references.
Candy, S. G. (1990).
Biology of the mountain pinhole borer,
Platypus subgranosus Scheld, in Tasmania.
MA thesis, University of Tasmania, Australia.
https://eprints.utas.edu.au/18864/
.
Boersch-Supan, P. H. (2021). Modeling insect phenology using ordinal regression and continuation ratio models. ReScience C, 7.1, 1–14.
budworm
summary(budworm)
calibrate
is a generic function used to produce calibrations
from various model fitting functions. The function invokes
particular ‘methods’ which depend on the ‘class’ of the first
argument.
calibrate(object, ...)
object |
An object for which a calibration is desired. |
... |
Additional arguments affecting the calibration produced.
Usually the most important argument in these methods is newdata. |
Given a regression model with explanatory variables X and
response Y,
calibration involves estimating X from Y using the
regression model.
It can be loosely thought of as the opposite of predict
(which takes an X and returns a Y of some sort).
In general,
the central algorithm is maximum likelihood calibration.
Given a new response Y,
some function of the explanatory variables X is returned.
For example,
for constrained ordination models such as CQO and CAO models,
it is usually not possible to return X, so the latent
variables are returned instead (they are
linear combinations of the X).
See the specific calibrate
methods functions to see
what they return.
This function was not called predictx
because of the
inability of constrained ordination models to return X;
they can only return the latent variable values
(also known as site scores) instead.
T. W. Yee
ter Braak, C. J. F. and van Dam, H. (1989). Inferring pH from diatoms: a comparison of old and new calibration methods. Hydrobiologia, 178, 209–223.
predict
,
calibrate.rrvglm
,
calibrate.qrrvglm
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdzed environmental vars
set.seed(123)
pcao1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
             WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
             family = poissonff, data = hspider, Rank = 1, Bestof = 3,
             df1.nl = c(Zoraspin = 2, 1.9), Crow1positive = TRUE)
siteNos <- 1:2  # Calibrate these sites
cpcao1 <- calibrate(pcao1, trace = TRUE,
                    newdata = data.frame(depvar(pcao1)[siteNos, ],
                                         model.matrix(pcao1)[siteNos, ]))

# Graphically compare the actual site scores with their calibrated values
persp(pcao1, main = "Site scores: solid=actual, dashed=calibrated",
      label = TRUE, col = "blue", las = 1)
abline(v = latvar(pcao1)[siteNos], col = seq(siteNos))  # Actual scores
abline(v = cpcao1, lty = 2, col = seq(siteNos))  # Calibrated values

## End(Not run)
calibrate
is a generic function applied to
RR-VGLMs,
QRR-VGLMs and
RR-VGAMs, etc.
The object from which the calibration is performed.
Performs maximum likelihood calibration for constrained quadratic and additive ordination models (CQO and CAO models are better known as QRR-VGLMs and RR-VGAMs respectively).
calibrate.qrrvglm(object, newdata = NULL,
    type = c("latvar", "predictors", "response", "vcov", "everything"),
    lr.confint = FALSE, cf.confint = FALSE, level = 0.95,
    initial.vals = NULL, ...)
object |
The fitted CQO/CAO model. |
newdata |
A data frame with new response data, such as new species data. The default is to use the original data used to fit the model; however, the calibration may then take a long time because the computations are expensive. Note that the creation of the model frame associated with
|
type |
What type of result to be returned.
The first are the calibrated latent variables or site scores.
This is always computed.
The |
lr.confint , level
|
Compute approximate
likelihood ratio based confidence intervals?
If |
cf.confint |
Compute approximate
characteristic function based confidence intervals?
If |
initial.vals |
Initial values for the search.
For rank-1 models, this should be a vector having length
equal to |
... |
Arguments that are fed into
|
Given a fitted regression CQO/CAO model, maximum likelihood calibration is theoretically easy and elegant. However, the method assumes that all the responses are independent, which is often not true in practice. More details and references are given in Yee (2018) and ch.6 of Yee (2015).
The function optim
is used to search for
the maximum likelihood solution. Good initial values are
needed, and arguments in calibrate.qrrvglm.control
allow the user some control over the choice of these.
Several methods are implemented to obtain
confidence intervals/regions for the calibration estimates.
One method applies when lr.confint = TRUE:
a 4-column matrix is returned
with the confidence limits being the final 2 columns
(if type = "everything"
then the matrix is
returned in the lr.confint
list component).
Another similar method applies when cf.confint = TRUE.
There may be some redundancy in whatever is returned.
The remaining methods are selected using type
and are described as follows.
The argument type
determines what is returned.
If type = "everything"
then all the type
values
are returned in a list, with the following components.
Each component has length nrow(newdata)
.
latvar |
Calibrated latent variables or site scores
(the default).
This may have the attribute |
predictors |
Linear/quadratic or additive predictors. For example, for Poisson families, this will be on a log scale, and for binomial families, this will be on a logit scale. |
response |
Fitted values of the response, evaluated at the calibrated latent variables. |
vcov |
Wald-type estimated variance-covariance matrices of the
calibrated latent variables or site scores. Actually,
these are stored in a 3-D array whose dimension is
|
This function is computationally expensive.
Setting trace = TRUE
to get a running log can be a good idea.
This function has been tested but not extensively.
Despite the name of this function, CAO models are handled as well
to a certain extent.
Some combinations of parameters are not handled, e.g.,
lr.confint = TRUE
only works for rank-1,
type = "vcov"
only works for
binomialff
and poissonff
models with canonical links and noRRR = ~ 1
,
and higher-order rank models need
eq.tolerances = TRUE
or I.tolerances = TRUE
as well.
For rank-1 objects, lr.confint = TRUE
is recommended
above type = "vcov"
in terms of accuracy and overall generality.
For class "qrrvglm"
objects it is necessary that
all response' tolerance matrices are positive-definite
which correspond to bell-shaped response curves/surfaces.
For binomialff
and poissonff
models
the deviance
slot is used for the optimization rather than
the loglikelihood
slot, therefore one can calibrate using
real-valued responses. (If the loglikelihood
slot were used
then functions such as dpois
would be used
with log = TRUE
and would then be restricted to
integer-valued responses.)
Maximum likelihood calibration for
Gaussian logit regression models may be performed by
rioja but this applies to a single environmental variable
such as pH
in data("SWAP", package = "rioja")
.
In VGAM calibrate()
estimates values of the
latent variable rather than individual explanatory variables,
hence the setting is more one of ordination.
T. W. Yee.
Recent work on the standard errors by
David Zucker and
Sam Oman at HUJI
is gratefully acknowledged—these are returned in the
vcov
component and provided inspiration for lr.confint
and cf.confint
.
A joint publication is being prepared on this subject.
Abate, J. and Whitt, W. (1992). The Fourier-series method for inverting transforms of probability distributions. Queueing Systems, 10, 5–88.
ter Braak, C. J. F. (1995). Calibration. In: Data Analysis in Community and Landscape Ecology by Jongman, R. H. G., ter Braak, C. J. F. and van Tongeren, O. F. R. (Eds.) Cambridge University Press, Cambridge.
calibrate.qrrvglm.control
,
calibrate.rrvglm
,
calibrate
,
cqo
,
cao
,
optim
,
uniroot
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdze environmental variables
set.seed(123)
siteNos <- c(1, 5)  # Calibrate these sites
pet1 <- cqo(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
            WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
            trace = FALSE,
            data = hspider[-siteNos, ],  # Sites not in fitted model
            family = poissonff, I.toler = TRUE, Crow1positive = TRUE)
y0 <- hspider[siteNos, colnames(depvar(pet1))]  # Species counts
(cpet1 <- calibrate(pet1, trace = TRUE, newdata = data.frame(y0)))
(clrpet1 <- calibrate(pet1, lr.confint = TRUE, newdata = data.frame(y0)))
(ccfpet1 <- calibrate(pet1, cf.confint = TRUE, newdata = data.frame(y0)))
(cp1wald <- calibrate(pet1, newdata = y0, type = "everything"))

## End(Not run)

## Not run: 
# Graphically compare the actual site scores with their calibrated
# values. 95 percent likelihood-based confidence intervals in green.
persp(pet1, main = "Site scores: solid=actual, dashed=calibrated",
      label = TRUE, col = "gray50", las = 1)
# Actual site scores:
xvars <- rownames(concoef(pet1))  # Variables comprising the latvar
est.latvar <- as.matrix(hspider[siteNos, xvars]) %*% concoef(pet1)
abline(v = est.latvar, col = seq(siteNos))
abline(v = cpet1, lty = 2, col = seq(siteNos))  # Calibrated values
arrows(clrpet1[, 3], c(60, 60), clrpet1[, 4], c(60, 60),  # Add CIs
       length = 0.08, col = "orange", angle = 90, code = 3, lwd = 2)
arrows(ccfpet1[, 3], c(70, 70), ccfpet1[, 4], c(70, 70),  # Add CIs
       length = 0.08, col = "limegreen", angle = 90, code = 3, lwd = 2)
arrows(cp1wald$latvar - 1.96 * sqrt(cp1wald$vcov), c(65, 65),
       cp1wald$latvar + 1.96 * sqrt(cp1wald$vcov), c(65, 65),  # Wald CIs
       length = 0.08, col = "blue", angle = 90, code = 3, lwd = 2)
legend("topright", lwd = 2,
       leg = c("CF interval", "Wald interval", "LR interval"),
       col = c("limegreen", "blue", "orange"), lty = 1)

## End(Not run)
Algorithmic constants and parameters for running
calibrate.qrrvglm
are set using this function.
calibrate.qrrvglm.control(object, trace = FALSE,
    method.optim = "BFGS", gridSize = ifelse(Rank == 1, 21, 9),
    varI.latvar = FALSE, ...)
object |
The fitted CQO/CAO model. The user should ignore this argument. |
trace |
Logical indicating if output should be produced for each iteration.
It is a good idea to set this argument to be |
method.optim |
Character. Fed into the |
gridSize |
Numeric, recycled to length |
varI.latvar |
Logical. For CQO objects only, this argument is fed into
|
... |
Avoids an error message for extraneous arguments. |
Most CQO/CAO users will only need to make use of trace
and gridSize
. These arguments should be used inside their
call to calibrate.qrrvglm
, not this function
directly.
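For instance, a sketch of the intended usage (here gridSize = 31 is just a hypothetical choice, and p1 is a fitted CQO object such as the one in the example below):

# Control arguments are supplied inside the calibrate() call itself:
# cp1 <- calibrate(p1, trace = TRUE, gridSize = 31)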
A list with the following components.
trace |
Numeric (even though the input can be logical). |
gridSize |
Positive integer. |
varI.latvar |
Logical. |
Despite the name of this function, CAO models are handled as well.
Yee, T. W. (2020). On constrained and unconstrained quadratic ordination. Manuscript in preparation.
calibrate.qrrvglm
,
Coef.qrrvglm
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Needed for I.tol=TRUE
set.seed(123)
p1 <- cqo(cbind(Alopacce, Alopcune, Pardlugu, Pardnigr,
                Pardpull, Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          family = poissonff, data = hspider, I.tol = TRUE)
sort(deviance(p1, history = TRUE))  # A history of all the iterations
siteNos <- 3:4  # Calibrate these sites
cp1 <- calibrate(p1, trace = TRUE,
                 new = data.frame(depvar(p1)[siteNos, ]))

## End(Not run)

## Not run: 
# Graphically compare the actual site scores with their calibrated values
persp(p1, main = "Site scores: solid=actual, dashed=calibrated",
      label = TRUE, col = "blue", las = 1)
abline(v = latvar(p1)[siteNos], col = seq(siteNos))  # Actual site scores
abline(v = cp1, lty = 2, col = seq(siteNos))  # Calibrated values

## End(Not run)
Performs maximum likelihood calibration for constrained linear ordination models (CLO models are better known as RR-VGLMs).
calibrate.rrvglm(object, newdata = NULL,
    type = c("latvar", "predictors", "response", "vcov", "everything"),
    lr.confint = FALSE, cf.confint = FALSE, level = 0.95,
    initial.vals = NULL, ...)
object |
The fitted |
newdata |
See |
type |
See |
lr.confint , cf.confint , level
|
Same as |
initial.vals |
Same as |
... |
Arguments that are fed into
|
Given a fitted regression CLO model, maximum likelihood calibration is theoretically easy and elegant. However, the method assumes that all responses are independent. More details and references are given in Yee (2015).
Calibration requires grouped or non-sparse data
as the response.
For example,
if the family function is multinomial
then
one cannot usually calibrate y0
if it is a vector of 0s
except for one 1.
Instead, the response vector should be from grouped data
so that there are few 0s.
Indeed, it is found empirically that the stereotype model
(also known as a reduced-rank multinomial
logit
model) calibrates well only with grouped data, and
if the response vector is all 0s except for one 1 then
the MLE will probably be at -Inf
or +Inf
.
As another example, if the family function is poissonff
then y0
must not be a vector of all 0s; instead, the response
vector should ideally have few 0s. In general, you can use simulation
to see what type of data calibrates acceptably; a rough sketch follows.
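As one such sketch under poissonff (the species means here are hypothetical), one can inspect how many 0s typical response vectors would contain:

set.seed(1)
lambda <- c(2, 5, 1, 3)  # Hypothetical species means
Y <- matrix(rpois(100 * 4, rep(lambda, each = 100)), 100, 4)
table(rowSums(Y == 0))  # Rows with many 0s may calibrate poorly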
Internally, this function is a simplification of
calibrate.qrrvglm
and users should look at
that function for details.
Good initial values are
needed, and a grid is constructed to obtain these.
The function calibrate.rrvglm.control
allows the user some control over the choice of these.
See calibrate.qrrvglm
.
Of course, the quadratic term in the latent variables vanishes
for RR-VGLMs, so the model is simpler.
See calibrate.qrrvglm
.
See calibrate.qrrvglm
about, e.g.,
calibration using real-valued responses.
T. W. Yee
calibrate.qrrvglm
,
calibrate
,
rrvglm
,
weightsvglm
,
optim
,
uniroot
.
## Not run: 
# Example 1
nona.xs.nz <- na.omit(xs.nz)  # Overkill!! (Data in VGAMdata package)
nona.xs.nz$dmd     <- with(nona.xs.nz, round(drinkmaxday))
nona.xs.nz$feethr  <- with(nona.xs.nz, round(feethour))
nona.xs.nz$sleephr <- with(nona.xs.nz, round(sleep))
nona.xs.nz$beats   <- with(nona.xs.nz, round(pulse))

p2 <- rrvglm(cbind(dmd, feethr, sleephr, beats) ~ age + smokenow +
             depressed + embarrassed + fedup + hurt + miserable + # 11 psychological
             nofriend + moody + nervous + tense + worry + worrier, # variables
             noRRR = ~ age + smokenow, trace = FALSE,
             poissonff, data = nona.xs.nz, Rank = 2)
cp2 <- calibrate(p2, newdata = head(nona.xs.nz, 9), trace = TRUE)
cp2

two.cases <- nona.xs.nz[1:2, ]  # Another calibration example
two.cases$dmd     <- c(4, 10)
two.cases$feethr  <- c(4, 7)
two.cases$sleephr <- c(7, 8)
two.cases$beats   <- c(62, 71)
(cp2b <- calibrate(p2, newdata = two.cases))

# Example 2
p1 <- rrvglm(cbind(dmd, feethr, sleephr, beats) ~ age + smokenow +
             depressed + embarrassed + fedup + hurt + miserable + # 11 psychological
             nofriend + moody + nervous + tense + worry + worrier, # variables
             noRRR = ~ age + smokenow, trace = FALSE,
             poissonff, data = nona.xs.nz, Rank = 1)
(cp1c <- calibrate(p1, newdata = two.cases, lr.confint = TRUE))

## End(Not run)
Algorithmic constants and parameters for running
calibrate.rrvglm
are set using this function.
calibrate.rrvglm.control(object, trace = FALSE,
    method.optim = "BFGS", gridSize = ifelse(Rank == 1, 17, 9), ...)
object |
The fitted |
trace , method.optim
|
Same as |
gridSize |
Same as |
... |
Avoids an error message for extraneous arguments. |
Most CLO users will only need to make use of trace
and gridSize
. These arguments should be used inside their
call to calibrate.rrvglm
, not this function
directly.
Similar to calibrate.qrrvglm.control
.
calibrate.rrvglm
,
Coef.rrvglm
.
A constrained additive ordination (CAO) model is fitted using the reduced-rank vector generalized additive model (RR-VGAM) framework.
cao(formula, family = stop("argument 'family' needs to be assigned"),
    data = list(), weights = NULL, subset = NULL, na.action = na.fail,
    etastart = NULL, mustart = NULL, coefstart = NULL,
    control = cao.control(...), offset = NULL, method = "cao.fit",
    model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL,
    constraints = NULL, extra = NULL, qr.arg = FALSE, smart = TRUE, ...)
formula |
a symbolic description of the model to be fit. The RHS of
the formula is used to construct the latent variables, upon
which the smooths are applied. All the variables in the
formula are used for the construction of latent variables
except for those specified by the argument |
family |
a function of class |
data |
an optional data frame containing the variables in
the model. By default the variables are taken from
|
weights |
an optional vector or matrix of (prior) weights to be used
in the fitting process. For |
subset |
an optional logical vector specifying a subset of observations to be used in the fitting process. |
na.action |
a function which indicates what should happen when
the data contain |
etastart |
starting values for the linear predictors. It is a
|
mustart |
starting values for the fitted values. It can be a vector
or a matrix. Some family functions do not make use of
this argument. For |
coefstart |
starting values for the coefficient vector. For |
control |
a list of parameters for controlling the fitting process.
See |
offset |
a vector or |
method |
the method to be used in fitting the model. The default
(and presently only) method |
model |
a logical value indicating whether the model frame
should be assigned in the |
x.arg , y.arg
|
logical values indicating whether the model matrix and
response vector/matrix used in the fitting process should
be assigned in the |
contrasts |
an optional list. See the |
constraints |
an optional list of constraint matrices. For
|
extra |
an optional list with any extra information that might
be needed by the family function. For |
qr.arg |
For |
smart |
logical value indicating whether smart prediction
( |
... |
further arguments passed into |
The arguments of cao
are a mixture of those from
vgam
and cqo
, but with some extras
in cao.control
. Currently, not all of the
arguments work properly.
CAO can be loosely thought of as the result of fitting
generalized additive models (GAMs) to several responses
(e.g., species) against a very small number of latent
variables. Each latent variable is a linear combination of
the explanatory variables; the coefficients (called C
below) are known as constrained coefficients
or canonical coefficients, and are interpreted as
weights or loadings. The C are estimated by maximum
likelihood estimation. It is often a good idea to apply
scale
to each explanatory variable first.
For each response (e.g., species), each latent variable
is smoothed by a cubic smoothing spline, thus CAO is
data-driven. If each smooth were a quadratic then CAO
would simplify to constrained quadratic ordination
(CQO; formerly called canonical Gaussian ordination
or CGO). If each smooth were linear then CAO would simplify
to constrained linear ordination (CLO). CLO can
theoretically be fitted with cao
by specifying
df1.nl=0
, however it is more efficient to use
rrvglm
.
Currently, only Rank=1
is implemented, and only
noRRR = ~1
models are handled.
With binomial data, the default formula is

logit P(Y_s = 1) = eta_s = f_s(nu),    s = 1, 2, ..., S,

where x_2 is a vector of environmental variables, and
nu = C^T x_2 is an R-vector of latent
variables. The eta_s
is an additive predictor
for species s,
and it models the probabilities
of presence as an additive model on the logit scale.
The matrix C
is estimated from the data, as well as
the smooth functions f_s.
The argument
noRRR =
~ 1
specifies that the vector x_1, defined for
RR-VGLMs and QRR-VGLMs, is simply a 1 for an intercept. Here,
the intercept in the model is absorbed into the functions f_s.
A
clogloglink
link may be preferable over a
logitlink
link.
With Poisson count data, the formula is

log E(Y_s) = eta_s = f_s(nu),

which models the mean response as an additive model on the log scale.
The fitted latent variables (site scores) are scaled to have
unit variance. The concept of a tolerance is undefined for
CAO models, but the optimums and maximums are defined. The
generic functions Max
and Opt
should work for CAO objects, but note that if the maximum
occurs at the boundary then Max
will return a
NA
. Inference for CAO models is currently undeveloped.
An object of class "cao"
(this may change to "rrvgam"
in the future).
Several generic functions can be applied to the object, e.g.,
Coef
, concoef
, lvplot
,
summary
.
CAO is very costly to compute. With version 0.7-8 it took 28 minutes on a fast machine. I hope to look at ways of speeding things up in the future.
Use set.seed
just prior to calling
cao()
to make your results reproducible. The reason
for this is that finding the optimal CAO model presents a difficult
optimization problem, partly because the log-likelihood
function contains many local solutions. To obtain the
(global) solution the user is advised to try many
initial values. This can be done by setting Bestof
to some appropriate value (see cao.control
). Trying
many initial values becomes progressively more important as
the nonlinear degrees of freedom of the smooths increase.
CAO models are computationally expensive, therefore setting
trace = TRUE
is a good idea, as is running it first
on a simple random sample of the data set.
Sometimes the IRLS algorithm does not converge within the FORTRAN code. This results in warnings being issued. In particular, if an error code of 3 is issued, then this indicates the IRLS algorithm has not converged. One possible remedy is to increase or decrease the nonlinear degrees of freedom so that the curves become more or less flexible, respectively.
T. W. Yee
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
cao.control
,
Coef.cao
,
cqo
,
latvar
,
Opt
,
Max
,
calibrate.qrrvglm
,
persp.cao
,
poissonff
,
binomialff
,
negbinomial
,
gamma2
,
set.seed
,
gam()
in gam,
trapO
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdzd environmental vars
set.seed(149)  # For reproducible results
ap1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull) ~
           WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
           family = poissonff, data = hspider, Rank = 1,
           df1.nl = c(Pardpull = 2.7, 2.5),
           Bestof = 7, Crow1positive = FALSE)
sort(deviance(ap1, history = TRUE))  # A history of all the iterations

Coef(ap1)
concoef(ap1)

par(mfrow = c(2, 2))
plot(ap1)  # All the curves are unimodal; some quite symmetric

par(mfrow = c(1, 1), las = 1)
index <- 1:ncol(depvar(ap1))
lvplot(ap1, lcol = index, pcol = index, y = TRUE)

trplot(ap1, label = TRUE, col = index)
abline(a = 0, b = 1, lty = 2)

trplot(ap1, label = TRUE, col = "blue", log = "xy", which.sp = c(1, 3))
abline(a = 0, b = 1, lty = 2)

persp(ap1, col = index, lwd = 2, label = TRUE)
abline(v = Opt(ap1), lty = 2, col = index)
abline(h = Max(ap1), lty = 2, col = index)

## End(Not run)
Algorithmic constants and parameters for a constrained additive
ordination (CAO), by fitting a reduced-rank vector generalized
additive model (RR-VGAM), are set using this function.
This is the control function for cao
.
cao.control(Rank = 1, all.knots = FALSE, criterion = "deviance",
            Cinit = NULL, Crow1positive = TRUE, epsilon = 1.0e-05,
            Etamat.colmax = 10, GradientFunction = FALSE,
            iKvector = 0.1, iShape = 0.1, noRRR = ~ 1, Norrr = NA,
            SmallNo = 5.0e-13, Use.Init.Poisson.QO = TRUE,
            Bestof = if (length(Cinit)) 1 else 10, maxitl = 10,
            imethod = 1, bf.epsilon = 1.0e-7, bf.maxit = 10,
            Maxit.optim = 250, optim.maxit = 20, sd.sitescores = 1.0,
            sd.Cinit = 0.02, suppress.warnings = TRUE, trace = TRUE,
            df1.nl = 2.5, df2.nl = 2.5, spar1 = 0, spar2 = 0, ...)
Rank |
The numerical rank |
all.knots |
Logical indicating if all distinct points of the smoothing
variables are to be used as knots. Assigning the value
|
criterion |
Convergence criterion. Currently, only one is supported: the deviance is minimized. |
Cinit |
Optional initial C matrix which may speed up convergence. |
Crow1positive |
Logical vector of length |
epsilon |
Positive numeric. Used to test for convergence for GLMs fitted in FORTRAN. Larger values mean a loosening of the convergence criterion. |
Etamat.colmax |
Positive integer, no smaller than |
GradientFunction |
Logical. Whether |
iKvector , iShape
|
See |
noRRR |
Formula giving terms that are not to be included
in the reduced-rank regression (or formation of the latent
variables). The default is to omit the intercept term from
the latent variables. Currently, only |
Norrr |
Defunct. Please use |
SmallNo |
Positive numeric between |
Use.Init.Poisson.QO |
Logical. If |
Bestof |
Integer. The best of |
maxitl |
Positive integer. Maximum number of Newton-Raphson/Fisher-scoring/local-scoring iterations allowed. |
imethod |
See |
bf.epsilon |
Positive numeric. Tolerance used by the modified vector backfitting algorithm for testing convergence. |
bf.maxit |
Positive integer. Number of backfitting iterations allowed in the compiled code. |
Maxit.optim |
Positive integer.
Number of iterations given to the function
|
optim.maxit |
Positive integer.
Number of times |
sd.sitescores |
Numeric. Standard deviation of the
initial values of the site scores, which are generated from
a normal distribution.
Used when |
sd.Cinit |
Standard deviation of the initial values for the elements
of C.
These are normally distributed with mean zero.
This argument is used only if |
suppress.warnings |
Logical. Suppress warnings? |
trace |
Logical indicating if output should be produced for each
iteration. Having the value |
df1.nl , df2.nl
|
Numeric and non-negative, recycled to length S.
Nonlinear degrees
of freedom for smooths of the first and second latent variables.
A value of 0 means the smooth is linear. Roughly, a value between
1.0 and 2.0 often has the approximate flexibility of a quadratic.
The user should not assign too large a value to this argument, e.g.,
the value 4.0 is probably too high. The argument |
spar1 , spar2
|
Numeric and non-negative, recycled to length S.
Smoothing parameters of the
smooths of the first and second latent variables. The larger
the value, the more smooth (less wiggly) the fitted curves.
These arguments are an
alternative to specifying |
... |
Ignored at present. |
Many of these arguments are identical to
qrrvglm.control
. Here, R is the
Rank
, M is the number of additive predictors, and
S is the number of responses (species). Thus
M = S for binomial and Poisson responses, and
M = 2S for the
negative binomial and 2-parameter gamma distributions.
Allowing the smooths too much flexibility means the CAO
optimization problem becomes more difficult to solve. This
is because the number of local solutions increases as
the nonlinearity of the smooths increases. In situations
of high nonlinearity, many initial values should be used,
so that Bestof
should be assigned a larger value. In
general, there should be a reasonable value of df1.nl
somewhere between 0 and about 3 for most data sets.
A list with the components corresponding to its arguments, after some basic error checking.
The argument df1.nl
can be inputted in the format
c(spp1 = 2, spp2 = 3, 2.5)
, say, meaning the default
value is 2.5, but two species have alternative values.
If spar1 = 0
and df1.nl = 0
then this represents
fitting linear functions (CLO). Currently, this is handled in
the awkward manner of setting df1.nl
to be a small
positive value, so that the smooth is almost linear but
not quite. A proper fix to this special case should be done
in the near future.
T. W. Yee
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
Green, P. J. and Silverman, B. W. (1994). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach, London: Chapman & Hall.
cao
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardized environmental vars
set.seed(123)
ap1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
           WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
           family = poissonff, data = hspider,
           df1.nl = c(Zoraspin = 2.3, 2.1),
           Bestof = 10, Crow1positive = FALSE)
sort(deviance(ap1, history = TRUE))  # A history of all the iterations

Coef(ap1)

par(mfrow = c(2, 3))  # All or most of the curves are unimodal; some are
plot(ap1, lcol = "blue")  # quite symmetric. Hence a CQO model should be ok

par(mfrow = c(1, 1), las = 1)
index <- 1:ncol(depvar(ap1))  # lvplot is jagged because only 28 sites
lvplot(ap1, lcol = index, pcol = index, y = TRUE)

trplot(ap1, label = TRUE, col = index)
abline(a = 0, b = 1, lty = 2)

persp(ap1, label = TRUE, col = 1:4)

## End(Not run)
Density, distribution function, quantile function and random generation for the cardioid distribution.
dcard(x, mu, rho, log = FALSE)
pcard(q, mu, rho, lower.tail = TRUE, log.p = FALSE)
qcard(p, mu, rho, tolerance = 1e-07, maxits = 500,
      lower.tail = TRUE, log.p = FALSE)
rcard(n, mu, rho, ...)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
mu , rho
|
See |
tolerance , maxits , ...
|
The first two are control parameters for the algorithm used
to solve for the roots of a nonlinear system of equations;
|
log |
Logical.
If |
lower.tail , log.p
|
See cardioid
, the VGAM family function
for estimating the two parameters by maximum likelihood
estimation, for the formula of the probability density
function and other details.
dcard
gives the density,
pcard
gives the distribution function,
qcard
gives the quantile function, and
rcard
generates random deviates.
Convergence problems might occur with rcard
.
Thomas W. Yee and Kai Huang
## Not run: 
mu <- 4; rho <- 0.4; x <- seq(0, 2*pi, len = 501)
plot(x, dcard(x, mu, rho), type = "l", las = 1, ylim = c(0, 1),
     ylab = paste("[dp]card(mu=", mu, ", rho=", rho, ")"),
     main = "Blue is density, orange is the CDF", col = "blue",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pcard(x, mu, rho), col = "orange")

probs <- seq(0.1, 0.9, by = 0.1)
Q <- qcard(probs, mu, rho)
lines(Q, dcard(Q, mu, rho), col = "purple", lty = 3, type = "h")
lines(Q, pcard(Q, mu, rho), col = "purple", lty = 3, type = "h")
abline(h = c(0, probs, 1), v = c(0, 2*pi), col = "purple", lty = 3)

max(abs(pcard(Q, mu, rho) - probs))  # Should be 0

## End(Not run)
Estimates the two parameters of the cardioid distribution by maximum likelihood estimation.
cardioid(lmu = extlogitlink(min = 0, max = 2*pi),
         lrho = extlogitlink(min = -0.5, max = 0.5),
         imu = NULL, irho = 0.3, nsimEIM = 100, zero = NULL)
lmu , lrho
|
Parameter link functions applied to the |
imu , irho
|
Initial values.
A |
nsimEIM , zero
|
See |
The two-parameter cardioid distribution has a density that can be written as

f(y; mu, rho) = (1 / (2*pi)) * (1 + 2 * rho * cos(y - mu))

where 0 < y < 2*pi,
0 < mu < 2*pi, and
-0.5 < rho < 0.5 is the concentration
parameter.
The default link functions enforce the range constraints of
the parameters.

For positive rho the distribution is unimodal and
symmetric about
mu.
The mean of Y
(which make up the fitted values) is
pi - 2 * rho * sin(mu).
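As a quick numerical check of the above (a sketch only; it assumes VGAM is loaded for dcard), the density can be integrated over (0, 2*pi) and the mean compared with the formula:

mu <- 4; rho <- 0.4
integrate(function(y) dcard(y, mu, rho), 0, 2 * pi)$value      # Should be 1
integrate(function(y) y * dcard(y, mu, rho), 0, 2 * pi)$value  # The mean
pi - 2 * rho * sin(mu)                                         # Should match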
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
, rrvglm
and vgam
.
Numerically, this distribution can be difficult to fit because
of a log-likelihood having multiple maximums. The user is
therefore encouraged to try different starting values, i.e.,
make use of imu
and irho
.
Fisher scoring using simulation is used.
T. W. Yee
Jammalamadaka, S. R. and SenGupta, A. (2001). Topics in Circular Statistics, Singapore: World Scientific.
rcard
,
extlogitlink
,
vonmises
.
CircStats and circular currently have a lot more R functions for circular data than the VGAM package.
## Not run: 
cdata <- data.frame(y = rcard(n = 1000, mu = 4, rho = 0.45))
fit <- vglm(y ~ 1, cardioid, data = cdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
c(with(cdata, mean(y)), head(fitted(fit), 1))
summary(fit)

## End(Not run)
Computes the cauchit (tangent) link transformation, including its inverse and the first two derivatives.
cauchitlink(theta, bvalue = .Machine$double.eps,
            inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue |
See |
inverse , deriv , short , tag
|
Details at |
This link function is an alternative link function for parameters that lie in the unit interval. This type of link bears the same relation to the Cauchy distribution as the probit link bears to the Gaussian. One characteristic of this link function is that the tail is heavier relative to the other links (see examples below).
Numerical values of theta
close to 0 or 1 or out
of range result in Inf
, -Inf
, NA
or NaN
.
For deriv = 0
, the tangent of theta
, i.e.,
tan(pi * (theta-0.5))
when inverse = FALSE
,
and if inverse = TRUE
then
0.5 + atan(theta)/pi
.
For deriv = 1
, then the function returns
d eta
/ d theta
as a function of
theta
if inverse = FALSE
, else if inverse
= TRUE
then it returns the reciprocal.
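A small numerical sketch checking the stated forms (it assumes VGAM is loaded):

theta <- seq(0.05, 0.95, by = 0.05)
max(abs(cauchitlink(theta) - tan(pi * (theta - 0.5))))  # Should be ~ 0
eta <- seq(-4, 4, by = 0.5)
max(abs(cauchitlink(eta, inverse = TRUE) - (0.5 + atan(eta)/pi)))  # ~ 0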
Numerical instability may occur when theta
is close to
1 or 0. One way of overcoming this is to use bvalue
.
As mentioned above,
in terms of the threshold approach with cumulative
probabilities for an ordinal response this link
function corresponds to the Cauchy distribution (see
cauchy1
).
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
logitlink
,
probitlink
,
clogloglink
,
loglink
,
cauchy
,
cauchy1
,
Cauchy
.
p <- seq(0.01, 0.99, by = 0.01)
cauchitlink(p)
max(abs(cauchitlink(cauchitlink(p), inverse = TRUE) - p))  # Should be 0

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
cauchitlink(p)  # Has no NAs

## Not run: 
par(mfrow = c(2, 2), lwd = (mylwd <- 2))
y <- seq(-4, 4, length = 100)
p <- seq(0.01, 0.99, by = 0.01)

for (d in 0:1) {
  matplot(p, cbind(logitlink(p, deriv = d), probitlink(p, deriv = d)),
          type = "n", col = "purple", ylab = "transformation", las = 1,
          main = if (d == 0) "Some probability link functions" else
                 "First derivative")
  lines(p,   logitlink(p, deriv = d), col = "limegreen")
  lines(p,  probitlink(p, deriv = d), col = "purple")
  lines(p, clogloglink(p, deriv = d), col = "chocolate")
  lines(p, cauchitlink(p, deriv = d), col = "tan")
  if (d == 0) {
    abline(v = 0.5, h = 0, lty = "dashed")
    legend(0, 4.5, c("logitlink", "probitlink", "clogloglink",
           "cauchitlink"), lwd = mylwd,
           col = c("limegreen", "purple", "chocolate", "tan"))
  } else
    abline(v = 0.5, lty = "dashed")
}

for (d in 0) {
  matplot(y, cbind( logitlink(y, deriv = d, inverse = TRUE),
                   probitlink(y, deriv = d, inverse = TRUE)),
          type = "n", col = "purple", xlab = "transformation", ylab = "p",
          main = if (d == 0) "Some inverse probability link functions" else
                 "First derivative", las = 1)
  lines(y,   logitlink(y, deriv = d, inverse = TRUE), col = "limegreen")
  lines(y,  probitlink(y, deriv = d, inverse = TRUE), col = "purple")
  lines(y, clogloglink(y, deriv = d, inverse = TRUE), col = "chocolate")
  lines(y, cauchitlink(y, deriv = d, inverse = TRUE), col = "tan")
  if (d == 0) {
    abline(h = 0.5, v = 0, lty = "dashed")
    legend(-4, 1, c("logitlink", "probitlink", "clogloglink",
           "cauchitlink"), lwd = mylwd,
           col = c("limegreen", "purple", "chocolate", "tan"))
  }
}
par(lwd = 1)

## End(Not run)
Estimates either the location parameter or both the location and scale parameters of the Cauchy distribution by maximum likelihood estimation.
cauchy(llocation = "identitylink", lscale = "loglink", imethod = 1, ilocation = NULL, iscale = NULL, gprobs.y = ppoints(19), gscale.mux = exp(-3:3), zero = "scale") cauchy1(scale.arg = 1, llocation = "identitylink", ilocation = NULL, imethod = 1, gprobs.y = ppoints(19), zero = NULL)
cauchy(llocation = "identitylink", lscale = "loglink", imethod = 1, ilocation = NULL, iscale = NULL, gprobs.y = ppoints(19), gscale.mux = exp(-3:3), zero = "scale") cauchy1(scale.arg = 1, llocation = "identitylink", ilocation = NULL, imethod = 1, gprobs.y = ppoints(19), zero = NULL)
llocation , lscale
|
Parameter link functions for the location parameter |
ilocation , iscale
|
Optional initial value for |
imethod |
Integer, either 1 or 2 or 3.
Initial method, three algorithms are implemented.
The user should try all possible values to help avoid
converging to a local solution.
Also, choose another value if convergence fails, or use
|
gprobs.y , gscale.mux , zero
|
See |
scale.arg |
Known (positive) scale parameter, called |
The Cauchy distribution has density function

f(y; a, b) = 1 / (pi * b * (1 + ((y - a)/b)^2))

where y and a
are real and finite,
and b > 0.
The distribution is symmetric about a
and has a heavy tail.
Its median and mode are a,
but the mean does not exist.
The fitted values are the estimates of a.
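As a small sketch, the density above can be checked against dcauchy from the stats package (the parameter values here are arbitrary):

a <- 1; b <- 2
y <- seq(-5, 5, by = 0.5)
max(abs(dcauchy(y, location = a, scale = b) -
        1 / (pi * b * (1 + ((y - a) / b)^2))))  # Should be ~ 0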
Fisher scoring is used.
If the scale parameter is known (cauchy1
) then there
may be multiple local maximum likelihood solutions for the
location parameter. However, if both location and scale
parameters are to be estimated (cauchy
) then there
is a unique maximum likelihood solution provided n > 2 and less than half the data are located at any one point.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
, and vgam
.
It is well-known that the Cauchy distribution may have
local maximums in its likelihood function; make full use of
imethod
, ilocation
, iscale
etc.
Good initial values are needed.
By default cauchy
searches for a starting
value for the location and scale parameters
on a 2-D grid.
Likewise, by default,
cauchy1
searches for a starting
value for the location parameter on a 1-D grid.
If convergence to the global maximum is not achieved then
it also pays to select a wide range
of initial values via the
ilocation
and/or
iscale
and/or imethod
arguments.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
Barnett, V. D. (1966). Evaluation of the maximum-likelihood estimator where the likelihood equation has multiple roots. Biometrika, 53, 151–165.
Copas, J. B. (1975). On the unimodality of the likelihood for the Cauchy distribution. Biometrika, 62, 701–704.
Efron, B. and Hinkley, D. V. (1978). Assessing the accuracy of the maximum likelihood estimator: Observed versus expected Fisher information. Biometrika, 65, 457–481.
Cauchy
,
cauchit
,
studentt
,
simulate.vlm
.
# Both location and scale parameters unknown
set.seed(123)
cdata <- data.frame(x2 = runif(nn <- 1000))
cdata <- transform(cdata, loc = exp(1 + 0.5 * x2), scale = exp(1))
cdata <- transform(cdata, y2 = rcauchy(nn, loc, scale))
fit2 <- vglm(y2 ~ x2, cauchy(lloc = "loglink"), data = cdata)
coef(fit2, matrix = TRUE)
head(fitted(fit2))  # Location estimates
summary(fit2)

# Location parameter unknown
cdata <- transform(cdata, scale1 = 0.4)
cdata <- transform(cdata, y1 = rcauchy(nn, loc, scale1))
fit1 <- vglm(y1 ~ x2, cauchy1(scale = 0.4), data = cdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Computes the cumulative distribution function (CDF) for observations, based on a LMS quantile regression.
cdf.lmscreg(object, newdata = NULL, ...)
object |
A VGAM quantile regression model, i.e.,
an object produced by modelling functions such as
|
newdata |
Data frame where the predictions are to be made. If missing, the original data is used. |
... |
Parameters which are passed into functions such as
|
The CDFs returned here are values lying in [0,1] giving
the relative probabilities associated with the quantiles in
newdata
. For example, a value near 0.75 means it is
close to the upper quartile of the distribution.
A vector of CDF values lying in [0,1].
The data are treated like quantiles, and the
percentiles are returned. The opposite is performed by
qtplot.lmscreg
.
The CDF values of the model have been placed in
@post$cdf
when the model was fitted.
Thomas W. Yee
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
deplot.lmscreg
,
qtplot.lmscreg
,
lms.bcn
,
lms.bcg
,
lms.yjn
,
CommonVGAMffArguments
.
fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz)
head(fit@post$cdf)
head(cdf(fit))  # Same
head(depvar(fit))
head(fitted(fit))

cdf(fit, data.frame(age = c(31.5, 39), BMI = c(28.4, 24)))
Maximum likelihood estimation of the 2-parameter Gumbel distribution when there are censored observations. A matrix response is not allowed.
cens.gumbel(llocation = "identitylink", lscale = "loglink",
            iscale = NULL, mean = TRUE, percentiles = NULL,
            zero = "scale")
llocation , lscale
|
Character.
Parameter link functions for the location and
(positive) |
iscale |
Numeric and positive.
Initial value for |
mean |
Logical. Return the mean? If |
percentiles |
Numeric with values between 0 and 100.
If |
zero |
An integer-valued vector specifying which linear/additive
predictors are modelled as intercepts only. The value
(possibly values) must be from the set {1,2} corresponding
respectively to |
This VGAM family function is like gumbel
but handles observations that are left-censored (so that
the true value would be less than the observed value) else
right-censored (so that the true value would be greater than
the observed value). To indicate which type of censoring,
input
extra = list(leftcensored = vec1, rightcensored = vec2)
where vec1
and vec2
are logical vectors
the same length as the response.
If the two components of this list are missing then the
logical values are taken to be FALSE
. The fitted
object has these two components stored in the extra
slot.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
and vgam
.
Numerical problems may occur if the amount of censoring is excessive.
See gumbel
for details about the Gumbel
distribution. The initial values are based on the assumption that
all observations are uncensored, and therefore could be improved upon.
T. W. Yee
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
gumbel
,
gumbelff
,
rgumbel
,
guplot
,
gev
,
venice
.
# Example 1 ystar <- venice[["r1"]] # Use the first order statistic as the response nn <- length(ystar) L <- runif(nn, 100, 104) # Lower censoring points U <- runif(nn, 130, 135) # Upper censoring points y <- pmax(L, ystar) # Left censored y <- pmin(U, y) # Right censored extra <- list(leftcensored = ystar < L, rightcensored = ystar > U) fit <- vglm(y ~ scale(year), data = venice, trace = TRUE, extra = extra, fam = cens.gumbel(mean = FALSE, perc = c(5, 25, 50, 75, 95))) coef(fit, matrix = TRUE) head(fitted(fit)) fit@extra # Example 2: simulated data nn <- 1000 ystar <- rgumbel(nn, loc = 1, scale = exp(0.5)) # The uncensored data L <- runif(nn, -1, 1) # Lower censoring points U <- runif(nn, 2, 5) # Upper censoring points y <- pmax(L, ystar) # Left censored y <- pmin(U, y) # Right censored ## Not run: par(mfrow = c(1, 2)); hist(ystar); hist(y); extra <- list(leftcensored = ystar < L, rightcensored = ystar > U) fit <- vglm(y ~ 1, trace = TRUE, extra = extra, fam = cens.gumbel) coef(fit, matrix = TRUE)
# Example 1 ystar <- venice[["r1"]] # Use the first order statistic as the response nn <- length(ystar) L <- runif(nn, 100, 104) # Lower censoring points U <- runif(nn, 130, 135) # Upper censoring points y <- pmax(L, ystar) # Left censored y <- pmin(U, y) # Right censored extra <- list(leftcensored = ystar < L, rightcensored = ystar > U) fit <- vglm(y ~ scale(year), data = venice, trace = TRUE, extra = extra, fam = cens.gumbel(mean = FALSE, perc = c(5, 25, 50, 75, 95))) coef(fit, matrix = TRUE) head(fitted(fit)) fit@extra # Example 2: simulated data nn <- 1000 ystar <- rgumbel(nn, loc = 1, scale = exp(0.5)) # The uncensored data L <- runif(nn, -1, 1) # Lower censoring points U <- runif(nn, 2, 5) # Upper censoring points y <- pmax(L, ystar) # Left censored y <- pmin(U, y) # Right censored ## Not run: par(mfrow = c(1, 2)); hist(ystar); hist(y); extra <- list(leftcensored = ystar < L, rightcensored = ystar > U) fit <- vglm(y ~ 1, trace = TRUE, extra = extra, fam = cens.gumbel) coef(fit, matrix = TRUE)
Maximum likelihood estimation for the normal distribution with left and right censoring.
cens.normal(lmu = "identitylink", lsd = "loglink", imethod = 1, zero = "sd")
lmu , lsd
|
Parameter link functions
applied to the mean and standard deviation parameters.
See |
imethod |
Initialization method. Either 1 or 2, this specifies two methods for obtaining initial values for the parameters. |
zero |
A vector, e.g., containing the value 1 or 2; if so,
the mean or standard deviation respectively are modelled
as an intercept only.
Setting |
This function is like uninormal
but handles
observations that are left-censored (so that the true value
would be less than the observed value) or right-censored
(so that the true value would be greater than the observed
value). To indicate which type of censoring, input extra
= list(leftcensored = vec1, rightcensored = vec2)
where
vec1
and vec2
are logical vectors the same length
as the response.
If the two components of this list are missing then the logical
values are taken to be FALSE
. The fitted object has
these two components stored in the extra
slot.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
and vgam
.
This function, which is an alternative to tobit
,
cannot handle a matrix response
and uses different working weights.
If there are no censored observations then
uninormal
is recommended instead.
T. W. Yee
tobit
,
uninormal
,
double.cens.normal
.
## Not run:
cdata <- data.frame(x2 = runif(nn <- 1000))  # ystar are true values
cdata <- transform(cdata, ystar = rnorm(nn, m = 100 + 15 * x2, sd = exp(3)))
with(cdata, hist(ystar))
cdata <- transform(cdata, L = runif(nn,  80,  90),   # Lower censoring points
                          U = runif(nn, 130, 140))   # Upper censoring points
cdata <- transform(cdata, y = pmax(L, ystar))  # Left censored
cdata <- transform(cdata, y = pmin(U, y))      # Right censored
with(cdata, hist(y))
Extra <- list(leftcensored  = with(cdata, ystar < L),
              rightcensored = with(cdata, ystar > U))
fit1 <- vglm(y ~ x2, cens.normal, data = cdata, crit = "c", extra = Extra)
fit2 <- vglm(y ~ x2, tobit(Lower = with(cdata, L), Upper = with(cdata, U)),
             data = cdata, crit = "c", trace = TRUE)
coef(fit1, matrix = TRUE)
max(abs(coef(fit1, matrix = TRUE) -
        coef(fit2, matrix = TRUE)))  # Should be 0
names(fit1@extra)
## End(Not run)
Family function for a censored Poisson response.
cens.poisson(link = "loglink", imu = NULL, biglambda = 10, smallno = 1e-10)
link |
Link function applied to the mean;
see |
imu |
Optional initial value;
see |
biglambda , smallno
|
Used to help robustify the code when |
Often a table of Poisson counts has an entry J+ meaning
that the count is J or greater, i.e., at least J.
This family function is similar to
poissonff
but handles such censored data. The input requires
SurvS4
. Only a univariate response is allowed.
The Newton-Raphson algorithm is used.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as
vglm
and
vgam
.
As the response is discrete,
care is required with Surv
(the old class because of
setOldClass(c("SurvS4", "Surv"))
;
see
setOldClass
),
especially with
"interval"
censored data because of the
(start, end]
format.
See the examples below.
The examples have
y < L
as left censored and
y >= U
(formatted as U+
) as right censored observations,
therefore
L <= y < U
is for uncensored and/or interval censored
observations.
Consequently the input must be tweaked to conform to the
(start, end]
format.
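To make the required tweak concrete, here is a minimal hedged
sketch (a condensed version of Example 3 below; the variables
cdata, y, Lvec and Uvec are as constructed there):

icen <- with(cdata, Lvec <= y & y < Uvec)  # Interval-censored rows
# Shift both endpoints down by 1 so that the observed interval
# [L, U) is encoded as (L-1, U-1], as SurvS4 expects:
cdata$Lvec[icen] <- cdata$Lvec[icen] - 1
cdata$Uvec[icen] <- cdata$Uvec[icen] - 1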
Some attention has been directed at robustifying the code
when lambda
is very large; however, this currently works
for left- and right-censored data only, not
interval-censored data. Sometimes the fix involves an
approximation, hence it is a good idea to set trace = TRUE
.
The function poissonff
should be used
when there are no censored observations.
Also, NA
s are not permitted with SurvS4
,
nor is type = "counting"
.
Thomas W. Yee
See survival for background.
SurvS4
,
poissonff
,
Links
,
mills.ratio
.
# Example 1: right censored data
set.seed(123); U <- 20
cdata <- data.frame(y = rpois(N <- 100, exp(3)))
cdata <- transform(cdata, cy = pmin(U, y), rcensored = (y >= U))
cdata <- transform(cdata, status = ifelse(rcensored, 0, 1))
with(cdata, table(cy))
with(cdata, table(rcensored))
with(cdata, table(print(SurvS4(cy, status))))  # Check; U+ means >= U
fit <- vglm(SurvS4(cy, status) ~ 1, cens.poisson, data = cdata, trace = TRUE)
coef(fit, matrix = TRUE)
table(print(depvar(fit)))  # Another check; U+ means >= U

# Example 2: left censored data
L <- 15
cdata <- transform(cdata, cY = pmax(L, y),
                   lcensored = y < L)  # Note y < L, not cY == L or y <= L
cdata <- transform(cdata, status = ifelse(lcensored, 0, 1))
with(cdata, table(cY))
with(cdata, table(lcensored))
with(cdata, table(print(SurvS4(cY, status, type = "left"))))  # Check
fit <- vglm(SurvS4(cY, status, type = "left") ~ 1, cens.poisson,
            data = cdata, trace = TRUE)
coef(fit, matrix = TRUE)

# Example 3: interval censored data
cdata <- transform(cdata, Lvec = rep(L, len = N), Uvec = rep(U, len = N))
cdata <- transform(cdata,
                   icensored = Lvec <= y & y < Uvec)  # Not lcensored or rcensored
with(cdata, table(icensored))
cdata <- transform(cdata, status = rep(3, N))  # 3 == interval censored
cdata <- transform(cdata, status = ifelse(rcensored, 0, status))  # 0 means right censored
cdata <- transform(cdata, status = ifelse(lcensored, 2, status))  # 2 means left censored
# Have to adjust Lvec and Uvec because of the (start, end] format:
cdata$Lvec[with(cdata, icensored)] <- cdata$Lvec[with(cdata, icensored)] - 1
cdata$Uvec[with(cdata, icensored)] <- cdata$Uvec[with(cdata, icensored)] - 1
# Unchanged:
cdata$Lvec[with(cdata, lcensored)] <- cdata$Lvec[with(cdata, lcensored)]
cdata$Lvec[with(cdata, rcensored)] <- cdata$Uvec[with(cdata, rcensored)]
with(cdata,  # Check
     table(ii <- print(SurvS4(Lvec, Uvec, status, type = "interval"))))
fit <- vglm(SurvS4(Lvec, Uvec, status, type = "interval") ~ 1,
            cens.poisson, data = cdata, trace = TRUE)
coef(fit, matrix = TRUE)
table(print(depvar(fit)))  # Another check

# Example 4: Add in some uncensored observations
index <- (1:N)[with(cdata, icensored)]
index <- head(index, 4)
cdata$status[index] <- 1  # actual or uncensored value
cdata$Lvec[index] <- cdata$y[index]
with(cdata, table(ii <- print(SurvS4(Lvec, Uvec, status,
                                     type = "interval"))))  # Check
fit <- vglm(SurvS4(Lvec, Uvec, status, type = "interval") ~ 1,
            cens.poisson, data = cdata, trace = TRUE, crit = "c")
coef(fit, matrix = TRUE)
table(print(depvar(fit)))  # Another check
This data frame concerns family data on cystic fibrosis.
data(cfibrosis)
A data frame with 24 rows on the following 4 variables.
Each row corresponds to an ascertained family. For each such
family the variables record the number of siblings, how many
of those siblings are affected, and how many are ascertained.
The data set allows a classical segregation analysis
to be performed. In particular,
to test Mendelian segregation ratios in nuclear family data.
The likelihood has similarities with seq2binomial
.
The data are originally from Crow (1965) and appear as Table 2.3 of Lange (2002).
Crow, J. F. (1965) Problems of ascertainment in the analysis of family data. Epidemiology and Genetics of Chronic Disease. Public Health Service Publication 1163, Neel J. V., Shaw M. W., Schull W. J., editors, Department of Health, Education, and Welfare, Washington, DC, USA.
Lange, K. (2002) Mathematical and Statistical Methods for Genetic Analysis. Second Edition. Springer-Verlag: New York, USA.
cfibrosis
summary(cfibrosis)
Redirects the user to the function cqo
.
cgo(...)
... |
Ignored. |
The former function cgo
has been renamed cqo
because CGO (for canonical Gaussian ordination) is a confusing
and inaccurate name.
CQO (for constrained quadratic ordination) is better.
This new nomenclature is described in Yee (2006).
Nothing is returned; an error message is issued.
Therefore the code in Yee (2004) will not run without changing the
"g"
to a "q"
.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
cqo
.
## Not run: cgo()
## End(Not run)
Presence/absence of chest pain in 10186 New Zealand adults.
data(chest.nz)
A data frame with 73 rows and the following 5 variables.
a numeric vector; age (years).
a numeric vector of counts; no pain on LHS or RHS.
a numeric vector of counts; no pain on LHS but pain on RHS.
a numeric vector of counts; no pain on RHS but pain on LHS.
a numeric vector of counts; pain on LHS and RHS of chest.
Each adult was asked their age and whether they had experienced any pain or discomfort in their chest over the last six months. If yes, they indicated whether it was on the LHS and/or RHS of their chest.
MacMahon, S., Norton, R., Jackson, R., Mackie, M. J., Cheng, A., Vander Hoorn, S., Milne, A., McCulloch, A. (1995) Fletcher Challenge-University of Auckland Heart & Health Study: design and baseline findings. New Zealand Medical Journal, 108, 499–502.
## Not run:
fit <- vgam(cbind(nolnor, nolr, lnor, lr) ~ s(age, c(4, 3)),
            binom2.or(exchan = TRUE, zero = NULL), data = chest.nz)
coef(fit, matrix = TRUE)
## End(Not run)
## Not run: plot(fit, which.cf = 2, se = TRUE)
The Chinese population in New Zealand from 1867 to 2001, along with the whole of the New Zealand population.
data(chinese.nz)
A data frame with 27 observations on the following 4 variables.
year
Year.
male
Number of Chinese males.
female
Number of Chinese females.
nz
Total number in the New Zealand population.
Historically, there was a large exodus of Chinese from the Guangdong region starting in the mid-1800s to the gold fields of South Island of New Zealand, California (a region near Mexico), and southern Australia, etc. Discrimination then meant that only men were allowed entry, to hinder permanent settlement. In the case of New Zealand, the government relaxed its immigration laws after WWII to allow wives of Chinese already in NZ to join them because China had been among the Allied powers. Gradual relaxation of the immigration laws and an influx during the 1980s meant the Chinese population became increasingly demographically normal over time.
The NZ totals for the years 1867 and 1871 exclude the Maori population. Three modifications have been made to the female column to make the data internally consistent with the original table.
Page 6 of Aliens At My Table: Asians as New Zealanders See Them by M. Ip and N. Murphy, (2005). Penguin Books. Auckland, New Zealand.
## Not run:
par(mfrow = c(1, 2))
plot(female / (male + female) ~ year, chinese.nz, type = "b",
     ylab = "Proportion", col = "blue", las = 1,
     cex = 0.015 * sqrt(male + female),
     # cex = 0.10 * sqrt((male + female)^1.5 / sqrt(female) / sqrt(male)),
     main = "Proportion of NZ Chinese that are female")
abline(h = 0.5, lty = "dashed", col = "gray")

fit1.cnz <- vglm(cbind(female, male) ~ year, binomialff, data = chinese.nz)
fit2.cnz <- vglm(cbind(female, male) ~ sm.poly(year, 2), binomialff,
                 data = chinese.nz)
fit4.cnz <- vglm(cbind(female, male) ~ sm.bs(year, 5), binomialff,
                 data = chinese.nz)

lines(fitted(fit1.cnz) ~ year, chinese.nz, col = "purple", lty = 1)
lines(fitted(fit2.cnz) ~ year, chinese.nz, col = "green", lty = 2)
lines(fitted(fit4.cnz) ~ year, chinese.nz, col = "orange", lwd = 2, lty = 1)
legend("bottomright", col = c("purple", "green", "orange"),
       lty = c(1, 2, 1), leg = c("linear", "quadratic", "B-spline"))

plot(100*(male+female)/nz ~ year, chinese.nz, type = "b",
     ylab = "Percent", ylim = c(0, max(100*(male+female)/nz)),
     col = "blue", las = 1, main = "Percent of NZers that are Chinese")
abline(h = 0, lty = "dashed", col = "gray")
## End(Not run)
Maximum likelihood estimation of the degrees of freedom for a chi-squared distribution. Also fits the chi distribution.
chisq(link = "loglink", zero = NULL, squared = TRUE)
link , zero
|
See |
squared |
Logical.
Set |
The degrees of freedom is treated as a real parameter to be estimated and not as an integer. Being positive, a log link is used by default. Fisher scoring is used.
If a random variable has a chi-squared distribution then the square root of the random variable has a chi distribution. For both distributions, the fitted value is the mean.
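As a small hedged illustration of this relationship, the following
sketch (with invented data and an arbitrary true value of 5 for the
degrees of freedom) fits the chi distribution to square-rooted
chi-squared data:

set.seed(1)
ccdata <- data.frame(y = sqrt(rchisq(2000, df = 5)))  # Chi distributed
fit0 <- vglm(y ~ 1, chisq(squared = FALSE), data = ccdata)
Coef(fit0)  # Back-transformed df; should be near 5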
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Multiple responses are permitted. There may be convergence problems if the degrees of freedom is very large or close to zero.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
cdata <- data.frame(x2 = runif(nn <- 1000))
cdata <- transform(cdata, y1 = rchisq(nn, df = exp(1 - 1 * x2)),
                          y2 = rchisq(nn, df = exp(2 - 2 * x2)))
fit <- vglm(cbind(y1, y2) ~ x2, chisq, data = cdata, trace = TRUE)
coef(fit, matrix = TRUE)
Redirects the user to the function rrvglm
.
clo(...)
... |
Ignored. |
CLO stands for constrained linear ordination, and is fitted with a statistical class of models called reduced-rank vector generalized linear models (RR-VGLMs). It allows for generalized reduced-rank regression in that response types such as Poisson counts and presence/absence data can be handled.
Currently in the VGAM package, rrvglm
is
used to fit RR-VGLMs. However, the Author's opinion is that
linear responses to a latent variable (composite environmental
gradient) are not as common as unimodal responses, therefore
cqo
is often more appropriate.
The CLO/CQO/CAO nomenclature is described in Yee (2006).
Nothing is returned; an error message is issued.
Thomas W. Yee
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
## Not run: clo()
## End(Not run)
Computes the complementary log-log transformation, including its inverse and the first two derivatives. The complementary log transformation is also computed.
clogloglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
            short = TRUE, tag = FALSE)
cloglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
         short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue |
See |
inverse , deriv , short , tag
|
Details at |
The complementary log-log link function is
commonly used for parameters
that lie in the unit interval.
But unlike
logitlink
,
probitlink
and
cauchitlink
, this link is not
symmetric.
It is the inverse CDF of the extreme value
(or Gumbel or log-Weibull) distribution.
Numerical values of theta
close to 0 or 1 or out of range result
in Inf
, -Inf
, NA
or
NaN
.
The complementary log link function is
the same as the complementary log-log
but the outer log is omitted.
This link is suitable for lrho
in
betabinomial
because it
handles probability-like parameters but
also allows slightly negative values in theory.
In particular, cloglink
safeguards against parameters exceeding unity.
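A small hedged numeric check of the difference between the two
links (this assumes, per the description above, that cloglink
simply omits the outer logarithm):

theta <- c(-0.02, 0.25, 0.50, 0.99)
cloglink(theta)     # Defined even for the slightly negative value
clogloglink(theta)  # NaN for the negative value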
For deriv = 0
, the complementary log-log
of theta
,
i.e., log(-log(1 - theta))
when
inverse = FALSE
, and if
inverse = TRUE
then
1-exp(-exp(theta))
.
For deriv = 1
, the function returns
d eta
/ d theta
as a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it
returns the reciprocal.
Here, all logarithms are natural logarithms,
i.e., to base e.
Numerical instability may occur when
theta
is close to 1 or 0.
One way of overcoming this is to use
bvalue
.
Changing 1s to 0s and 0s to 1s in the
response means that effectively
a loglog link is fitted. That is,
transform the response y to 1 - y.
That's why only one of
clogloglink
and logloglink
is written.
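A hedged sketch of this trick with simulated binary data:
applying clogloglink to the complemented response 1 - y is,
up to a change of sign, a loglog-type link for y.

bindata <- data.frame(x2 = runif(nn <- 100))
bindata <- transform(bindata, y = rbinom(nn, size = 1, prob = 0.4))
# cloglog of P(1 - y = 1) equals log(-log(P(y = 1))),
# i.e., a loglog-type transformation of P(y = 1):
fit.ll <- vglm((1 - y) ~ x2, binomialff(link = "clogloglink"),
               data = bindata)
coef(fit.ll, matrix = TRUE)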
With constrained ordination
(e.g., cqo
and
cao
) used with
binomialff
, a complementary
log-log link function is preferred over the
default logitlink
,
for a good reason. See the example below.
In terms of the threshold approach with cumulative probabilities for an ordinal response this link function corresponds to the extreme value distribution.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
logitoffsetlink
,
logitlink
,
probitlink
,
cauchitlink
,
pgumbel
.
p <- seq(0.01, 0.99, by = 0.01)
clogloglink(p)
max(abs(clogloglink(clogloglink(p), inverse = TRUE) - p))  # Should be 0

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
clogloglink(p)  # Has NAs
clogloglink(p, bvalue = .Machine$double.eps)  # Has no NAs

## Not run:
p <- seq(0.01, 0.99, by = 0.01)
plot(p, logitlink(p), type = "l", col = "limegreen", lwd = 2, las = 1,
     main = "Some probability link functions", ylab = "transformation")
lines(p, probitlink(p), col = "purple", lwd = 2)
lines(p, clogloglink(p), col = "chocolate", lwd = 2)
lines(p, cauchitlink(p), col = "tan", lwd = 2)
abline(v = 0.5, h = 0, lty = "dashed")
legend(0.1, 4, c("logitlink", "probitlink", "clogloglink", "cauchitlink"),
       col = c("limegreen", "purple", "chocolate", "tan"), lwd = 2)
## End(Not run)

## Not run:
# This example shows that clogloglink is preferred over logitlink
n <- 500; p <- 5; S <- 3; Rank <- 1  # Species packing model:
mydata <- rcqo(n, p, S, eq.tol = TRUE, es.opt = TRUE, eq.max = TRUE,
               family = "binomial", hi.abundance = 5, seed = 123,
               Rank = Rank)
fitc <- cqo(attr(mydata, "formula"), I.tol = TRUE, data = mydata,
            fam = binomialff(multiple.responses = TRUE, link = "cloglog"),
            Rank = Rank)
fitl <- cqo(attr(mydata, "formula"), I.tol = TRUE, data = mydata,
            fam = binomialff(multiple.responses = TRUE, link = "logitlink"),
            Rank = Rank)
# Compare the fitted models (cols 1 and 3) with the truth (col 2)
cbind(concoef(fitc), attr(mydata, "concoefficients"), concoef(fitl))
## End(Not run)
Given M linear/additive predictors, construct the constraint matrices to allow symmetry, (linear and normal) ordering, etc. in terms such as the intercept.
CM.equid(M, Trev = FALSE, Tref = 1)
CM.free(M, Trev = FALSE, Tref = 1)
CM.ones(M, Trev = FALSE, Tref = 1)
CM.symm0(M, Trev = FALSE, Tref = 1)
CM.symm1(M, Trev = FALSE, Tref = 1)
CM.qnorm(M, Trev = FALSE, Tref = 1)
M |
Number of linear/additive predictors,
usually |
Tref |
Reference level for the threshold,
this should be a single value from |
Trev |
Logical. Apply reverse direction for the thresholds direction? This argument is ignored by some of the above functions. |
A constraint matrix is M by R, where R is its rank,
and usually its elements are 0, 1 or -1.
There is a constraint matrix for each column
of the LM matrix used to fit the
vglm
.
They are used to apportion the regression
coefficients to the linear predictors, e.g.,
parallelism, exchangeability, etc.
The functions described here are intended
to construct
constraint matrices easily for
symmetry constraints and
linear ordering etc.
They are potentially useful for categorical data
analysis (e.g., cumulative
,
multinomial
), especially for the
intercept term.
When applied to cumulative
,
they are sometimes called
structured thresholds,
e.g., in the ordinal literature.
One example is the stereotype model proposed
by Anderson (1984)
(see multinomial
and
rrvglm
) where the elements of
the A matrix are ordered.
This is not fully possible in VGAM but
some special cases can be fitted, e.g.,
use CM.equid
to create
a linear ordering.
And CM.symm1
might result in
fully ordered estimates too, etc.
CM.free
creates
free or unconstrained estimates.
This is almost always the case for VGLMs,
and the constraint matrix is simply diag(M)
.
CM.ones
creates
equal estimates,
which is also known as the parallelism
assumption in models such as
cumulative
.
It gets its name because the constraint matrix
is simply matrix(1, M, 1)
.
CM.equid
creates
equidistant estimates. This is a
linear scaling, and the direction and
origin are controlled by Trev
and Tref
respectively.
CM.qnorm
and
CM.qlogis
are based on
qnorm
and
qlogis
.
For example, CM.qnorm(M)
is essentially
cbind(qnorm(seq(M) / (M + 1)))
.
This might be useful with a model with
probitlink
applied to multiple
intercepts.
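These descriptions can be checked directly; a small hedged
sketch (the choice M = 4 is arbitrary):

M <- 4
CM.free(M)   # Stated above to be simply diag(M)
CM.ones(M)   # Stated above to be matrix(1, M, 1)
cbind(CM.qnorm(M), qnorm(seq(M) / (M + 1)))  # Should essentially agree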
Further details can be found at
cumulative
and
CommonVGAMffArguments
.
A constraint matrix.
CommonVGAMffArguments
,
cumulative
,
acat
,
cratio
,
sratio
,
multinomial
.
CM.equid(4)
CM.equid(4, Trev = TRUE, Tref = 3)
CM.symm1(5)
CM.symm0(5)
CM.qnorm(5)
Coalminers who are smokers without radiological pneumoconiosis, classified by age, breathlessness and wheeze.
data(coalminers)
A data frame with 9 age groups with the following 5 columns.
Counts with breathlessness and wheeze.
Counts with breathlessness but no wheeze.
Counts with no breathlessness but wheeze.
Counts with neither breathlessness nor wheeze.
Age of the coal miners (actually, the midpoints of the 5-year category ranges).
The data were published in Ashford and Sowden (1970). A more recent analysis is McCullagh and Nelder (1989, Section 6.6).
Ashford, J. R. and Sowden, R. R. (1970) Multi-variate probit analysis. Biometrics, 26, 535–546.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models. 2nd ed. London: Chapman & Hall.
str(coalminers)
Coef
is a generic function which computes model
coefficients from objects returned by modelling functions.
It is an auxiliary function to coef
that
enables extra capabilities for some specific models.
Coef(object, ...)
object |
An object for which the computation of other types of model coefficients or quantities is meaningful. |
... |
Other arguments fed into the specific methods function of the model. |
This function can often be useful for vglm
objects
with just an intercept term in the RHS of the formula, e.g.,
y ~ 1
. Then often this function will apply the inverse
link functions to the parameters. See the example below.
For reduced-rank VGLMs, this function can return the A, C matrices, etc.
For quadratic and additive ordination models, this function can return ecologically meaningful quantities such as tolerances, optimums and maximums.
The value returned depends specifically on the methods function invoked.
This function may not work for all VGAM family functions. You should check your results on some artificial data before applying it to models fitted to real data.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
coef
,
Coef.vlm
,
Coef.rrvglm
,
Coef.qrrvglm
,
depvar
.
nn <- 1000
bdata <- data.frame(y = rbeta(nn, shape1 = 1, shape2 = 3))  # Original scale
fit <- vglm(y ~ 1, betaR, data = bdata, trace = TRUE)  # Intercept-only model
coef(fit, matrix = TRUE)  # Both on a log scale
Coef(fit)  # On the original scale
This methods function returns important matrices etc. of a QO object.
Coef.qrrvglm(object, varI.latvar = FALSE, refResponse = NULL, ...)
object |
A CQO object, i.e., of class "qrrvglm". |
varI.latvar |
Logical indicating whether to scale the site scores (latent variables)
to have variance-covariance matrix equal to the rank-R identity matrix. |
refResponse |
Integer or character. Specifies the reference response or reference species. By default, the reference species is found by searching sequentially starting from the first species until a positive-definite tolerance matrix is found. Then this tolerance matrix is transformed to the identity matrix. Then the sites scores (latent variables) are made uncorrelated. See below for further details. |
... |
Currently unused. |
If I.tolerances=TRUE
or eq.tolerances=TRUE
(and its
estimated tolerance matrix is positive-definite) then all species'
tolerances are unity by transformation or by definition, and the spread
of the site scores can be compared to them. Conversely, if one wishes
to compare the tolerances with the site score variability then setting
varI.latvar=TRUE
is more appropriate.
For rank-2 QRR-VGLMs, one of the species can be chosen so that the
angle of its major axis and minor axis is zero, i.e., parallel to
the ordination axes. This means the latent variables have
independent effects on that species, and that its tolerance matrix is diagonal.
The argument refResponse
allows one to choose which is the reference
species, which must have a positive-definite tolerance matrix, i.e.,
is bell-shaped. If refResponse
is not specified, then the code will
try to choose some reference species starting from the first species.
Although the refResponse
argument could possibly be offered as
an option when fitting the model, it is currently available after
fitting the model, e.g., in the functions Coef.qrrvglm
and
lvplot.qrrvglm
.
The A, B1, C, T, D matrices/arrays
are returned, along with other slots.
The returned object has class "Coef.qrrvglm"
(see Coef.qrrvglm-class
).
Consider an equal-tolerances Poisson/binomial CQO model with
noRRR = ~ 1.
Its number of parameters grows roughly linearly with S, the number
of species, and with the number of environmental variables making
up the latent variable, and it is larger for rank 2 than for rank 1.
An unequal-tolerances Poisson/binomial CQO model with
noRRR = ~ 1
has more parameters still.
Since the total number of data points is n times S, where n
is the number of sites, it pays to divide the number
of data points by the number of parameters to get some idea
about how much information the parameters contain.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
cqo
,
Coef.qrrvglm-class
,
print.Coef.qrrvglm
,
lvplot.qrrvglm
.
set.seed(123)
x2 <- rnorm(n <- 100)
x3 <- rnorm(n)
x4 <- rnorm(n)
latvar1 <- 0 + x3 - 2*x4
lambda1 <- exp(3 - 0.5 * ( latvar1-0)^2)
lambda2 <- exp(2 - 0.5 * ( latvar1-1)^2)
lambda3 <- exp(2 - 0.5 * ((latvar1+4)/2)^2)  # Unequal tolerances
y1 <- rpois(n, lambda1)
y2 <- rpois(n, lambda2)
y3 <- rpois(n, lambda3)
set.seed(111)
# vvv p1 <- cqo(cbind(y1, y2, y3) ~ x2 + x3 + x4, poissonff, trace = FALSE)
## Not run: lvplot(p1, y = TRUE, lcol = 1:3, pch = 1:3, pcol = 1:3)
# vvv Coef(p1)
# vvv print(Coef(p1), digits=3)
The most pertinent matrices and other quantities pertaining to a QRR-VGLM (CQO model).
Objects can be created by calls of the form Coef(object,
...)
where object
is an object of class "qrrvglm"
(created by cqo
).
In this document, R is the rank,
M is the number of
linear predictors and
n is the number of observations.
A
:Of class "matrix"
, A, which are the
linear ‘coefficients’ of the matrix of latent variables.
It is M by R.
B1
:Of class "matrix"
, B1.
These correspond to terms of the argument noRRR
.
C
:Of class "matrix"
, C, the
canonical coefficients. It has R columns.
Constrained
:Logical. Whether the model is a constrained ordination model.
D
:Of class "array"
,
D[,,j]
is an order-Rank
matrix, for
j
= 1,...,M.
Ideally, these are negative-definite in order to make the response
curves/surfaces bell-shaped.
Rank
:The rank (dimension, number of latent variables)
of the RR-VGLM. Called R.
latvar
: by
matrix
of latent variable values.
latvar.order
:Of class "matrix"
, the permutation
returned when the function
order
is applied to each column of latvar
.
This enables each column of latvar
to be easily sorted.
Maximum
:Of class "numeric"
, the
maximum fitted values. That is, the fitted values
at the optimums for
noRRR = ~ 1
models.
If noRRR
is not ~ 1
then these will be NA
s.
NOS
:Number of species.
Optimum
:Of class "matrix"
, the values
of the latent variables where the optimums are.
If the curves are not bell-shaped, then the value will
be NA
or NaN
.
Optimum.order
:Of class "matrix"
, the permutation
returned when the function
order
is applied to each column of Optimum
.
This enables each row of Optimum
to be easily sorted.
bellshaped
:Vector of logicals: is each response curve/surface bell-shaped?
dispersion
:Dispersion parameter(s).
Dzero
:Vector of logicals, is each of the
response curves linear in the latent variable(s)?
It will be TRUE if and only if
D[,,j]
equals O (the zero matrix), for
j
= 1,...,M.
Tolerance
:Object of class "array"
,
Tolerance[,,j]
is an order-Rank
matrix, for
j
= 1,...,M, being the matrix of
tolerances (squared if on the diagonal).
These are denoted by T in Yee (2004).
Ideally, these are positive-definite in order to make the response
curves/surfaces bell-shaped.
The tolerance matrices satisfy
T_j = -0.5 * D_j^{-1}, i.e., minus half the inverse of D[,,j].
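As a hedged illustration, the tolerance matrices can be inspected
from a fitted model; this sketch assumes a fitted "qrrvglm"
object p1 such as the one constructed (commented out) in the
example below.

# cc <- Coef(p1)             # Assumes p1 is a fitted CQO object
# cc@Tolerance[, , 1]        # Tolerance matrix of the first species
# -0.5 * solve(cc@D[, , 1])  # Should essentially agree with it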
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Coef.qrrvglm
,
cqo
,
print.Coef.qrrvglm
.
x2 <- rnorm(n <- 100)
x3 <- rnorm(n)
x4 <- rnorm(n)
latvar1 <- 0 + x3 - 2*x4
lambda1 <- exp(3 - 0.5 * ( latvar1-0)^2)
lambda2 <- exp(2 - 0.5 * ( latvar1-1)^2)
lambda3 <- exp(2 - 0.5 * ((latvar1+4)/2)^2)
y1 <- rpois(n, lambda1)
y2 <- rpois(n, lambda2)
y3 <- rpois(n, lambda3)
yy <- cbind(y1, y2, y3)
# vvv p1 <- cqo(yy ~ x2 + x3 + x4, fam = poissonff, trace = FALSE)
## Not run: lvplot(p1, y = TRUE, lcol = 1:3, pch = 1:3, pcol = 1:3)
## End(Not run)
# vvv print(Coef(p1), digits = 3)
This methods function returns important matrices etc. of a RR-VGLM object.
Coef.rrvglm(object, ...)
object |
An object of class |
... |
Currently unused. |
The A, B1, C matrices are returned,
along with other slots.
See rrvglm
for details about RR-VGLMs.
An object of class "Coef.rrvglm"
(see Coef.rrvglm-class
).
This function is an alternative to coef.rrvglm
.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Coef.rrvglm-class
,
print.Coef.rrvglm
,
rrvglm
.
# Rank-1 stereotype model of Anderson (1984)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))
fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial,
              data = pneumo)
coef(fit, matrix = TRUE)
Coef(fit)
The most pertinent matrices and other quantities pertaining to a RR-VGLM.
Objects can be created by calls of the form
Coef(object, ...)
where object
is an object
of class rrvglm
(see rrvglm-class
).
In this document, M is the number of linear predictors
and
n is the number of observations.
A
:Of class "matrix"
, A.
B1
:Of class "matrix"
, B1.
C
:Of class "matrix"
, C.
Rank
:The rank of the RR-VGLM.
colx1.index
:Index of the columns of the
"vlm"
-type model matrix corresponding to the variables
in x1. These correspond to B1.
colx2.index
:Index of the columns of the
"vlm"
-type model matrix corresponding to the variables
in x2. These correspond to the reduced-rank regression.
Atilde
:Object of class "matrix"
, the
A matrix with the corner rows removed. Thus each of the
elements has been estimated. This matrix is returned only
if corner constraints were used.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Coef.rrvglm
,
rrvglm
,
rrvglm-class
,
print.Coef.rrvglm
.
# Rank-1 stereotype model of Anderson (1984)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))
fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3, multinomial,
              data = pneumo)
coef(fit, matrix = TRUE)
Coef(fit)
# print(Coef(fit), digits = 3)
Amongst other things, this function applies inverse link functions to the parameters of intercept-only VGLMs.
Coef.vlm(object, ...)
object |
A fitted model. |
... |
Arguments which may be passed into
|
Most VGAM family functions apply a link function to the parameters, e.g., positive parameters often have a log link and parameters between 0 and 1 a logit link. This function can back-transform the parameter estimates to the original scale.
For intercept-only models (e.g., formula is y ~ 1
)
the back-transformed parameter estimates can be returned.
This function may not work for all VGAM family functions. You should check your results on some artificial data before applying it to models fitted to real data.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
set.seed(123); nn <- 1000
bdata <- data.frame(y = rbeta(nn, shape1 = 1, shape2 = 3))
fit <- vglm(y ~ 1, betaff, data = bdata, trace = TRUE)  # intercept-only model
coef(fit, matrix = TRUE)  # log scale
Coef(fit)  # On the original scale
Extracts the estimated coefficients from vgam() objects.
coefvgam(object, type = c("linear", "nonlinear"), ...)
object |
A
|
type |
Character. The default is the first choice. |
... |
Optional arguments fed into
|
For VGAMs, because modified backfitting is performed,
each fitted function is decomposed into a linear and nonlinear
(smooth) part.
The argument type
is used to return which one is wanted.
A vector if type = "linear"
.
A list if type = "nonlinear"
, and each component of
this list corresponds to an s
term;
the component contains an S4 object with slot names such as
"Bcoefficients"
,
"knots"
,
"xmin"
,
"xmax"
.
Thomas W. Yee
fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
coef(fit)  # Same as coef(fit, type = "linear")
(ii <- coef(fit, type = "nonlinear"))
is.list(ii)
names(ii)
slotNames(ii[[1]])
Extracts the estimated coefficients from VLM objects such as VGLMs.
coefvlm(object, matrix.out = FALSE, label = TRUE, colon = FALSE, ...)
object |
An object for which the extraction of
coefficients is meaningful.
This will usually be a |
matrix.out |
Logical. If |
label |
Logical. If |
colon |
Logical. Explanatory variables which appear in more than one
linear/additive predictor are labelled with a colon,
e.g., |
... |
Currently unused. |
This function works in a similar way to
applying coef()
to a lm
or glm
object.
However, for VGLMs, there are more options available.
A vector usually.
A matrix if matrix.out = TRUE
.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
zdata <- data.frame(x2 = runif(nn <- 200))
zdata <- transform(zdata,
                   pstr0  = logitlink(-0.5 + 1*x2, inverse = TRUE),
                   lambda = loglink(   0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y2 = rzipois(nn, lambda, pstr0 = pstr0))
fit2 <- vglm(y2 ~ x2, zipoisson(zero = 1), data = zdata, trace = TRUE)
coef(fit2, matrix = TRUE)  # Always a good idea
coef(fit2)
coef(fit2, colon = TRUE)
Here is a description of some common and
typical arguments found in many VGAM
family functions, e.g.,
zero
,
lsigma
,
isigma
,
gsigma
,
eq.mean
,
nsimEI
and
parallel
.
TypicalVGAMfamilyFunction(lsigma = "loglink", isigma = NULL,
    zero = NULL, gsigma = exp(-5:5), eq.mean = FALSE,
    parallel = TRUE, imethod = 1, vfl = FALSE, Form2 = NULL,
    type.fitted = c("mean", "quantiles", "Qlink", "pobs0",
                    "pstr0", "onempstr0"),
    percentiles = c(25, 50, 75), probs.x = c(0.15, 0.85),
    probs.y = c(0.25, 0.50, 0.75), multiple.responses = FALSE,
    earg.link = FALSE, ishrinkage = 0.95, nointercept = NULL,
    whitespace = FALSE, bred = FALSE, lss = TRUE, oim = FALSE,
    nsimEIM = 100, byrow.arg = FALSE,
    link.list = list("(Default)" = "identitylink",
                     x2 = "loglink", x3 = "logofflink",
                     x4 = "multilogitlink", x5 = "multilogitlink"),
    earg.list = list("(Default)" = list(), x2 = list(),
                     x3 = list(offset = -1), x4 = list(),
                     x5 = list()),
    Thresh = NULL, nrfs = 1)
lsigma |
Character.
Link function applied to a parameter and not
necessarily a mean. See |
isigma |
Optional initial values can often be
inputted using an argument beginning with
|
zero |
An important argument, either an integer vector, or a vector of character strings. If an integer, then it specifies which
linear/additive predictor is modelled
as intercept-only. That is,
the regression coefficients are set to
zero for all covariates except for the
intercept. If Some VGAM family functions allow the
Suppose Note: The argument If the |
gsigma |
Grid-search initial values can be inputted
using an argument beginning with Some family functions have an argument
called |
eq.mean |
Logical.
Constrain all the means to be equal?
This type of argument is simpler than
|
parallel |
A logical, or a simple formula specifying
which terms have equal/unequal coefficients.
The formula must be simple, i.e., additive
with simple main effects terms. Interactions
and nesting etc. are not handled. To handle
complex formulas use the Here are some examples.
1. This argument is common in VGAM family
functions for categorical responses, e.g.,
|
nsimEIM |
Some VGAM family functions use
simulation to obtain an approximate expected
information matrix (EIM). For those that
do, the Some VGAM family functions provide
two algorithms for estimating the EIM.
If applicable, set |
imethod |
An integer with value VGAM family functions such
|
Form2 |
Formula.
Using applied to models with |
vfl |
A single logical.
This stands for
variance–variance factored loglinear
(VFL)
model.
If A good question is:
why is |
type.fitted |
Character.
Type of fitted value returned by
the The choice |
percentiles |
Numeric vector, with values between 0 and
100 (although it is not recommended that
exactly 0 or 100 be inputted). Used only
if |
probs.x , probs.y
|
Numeric, with values in (0, 1).
The probabilites that define quantiles with
respect to some vector, usually an |
lss |
Logical.
This stands for the ordering: location,
scale and shape. Should the ordering
of the parameters be in this order?
Almost all VGAM family functions
have this order by default, but in order
to match the arguments of existing R
functions, one might need to set
|
Thresh |
Thresholds is another name for the
intercepts, e.g., for categorical models.
They may be constrained by functions such as
|
whitespace |
Logical.
Should white spaces ( |
oim |
Logical.
Should the observed information matrices
(OIMs) be used for the working weights?
In general, setting |
nrfs |
Numeric, a value in |
,
multiple.responses |
Logical.
Some VGAM family functions allow
a multivariate or vector response.
If so, then usually the response is a
matrix with columns corresponding to the
individual response variables. They are
all fitted simultaneously. Arguments such
as |
earg.link |
This argument should be generally ignored. |
byrow.arg |
Logical.
Some VGAM family functions that handle
multiple responses have arguments that allow
input to be fed in which affect all the
responses, e.g., |
ishrinkage |
Shrinkage factor |
nointercept |
An integer-valued vector specifying
which linear/additive predictors have no
intercepts. Any values must be from the
set {1,2,..., |
bred |
Logical.
Some VGAM family functions will allow
bias-reduction based on the work by Kosmidis
and Firth. Sometimes half-stepping is a good
idea; set |
link.list , earg.list
|
Some VGAM family functions
(such as |
Full details will be given in documentation
yet to be written, at a later date!
A general recommendation is to set
trace = TRUE
whenever any model fitting
is done, since monitoring convergence is
usually very informative.
An object of class "vglmff"
(see
vglmff-class
). The object
is used by modelling functions such as
vglm
and vgam
.
The zero
argument is supplied for
convenience but conflicts can arise with other
arguments, e.g., the constraints
argument of vglm
and
vgam
. See Example 5 below
for an example. If not sure, use, e.g.,
constraints(fit)
and
coef(fit, matrix = TRUE)
to check the result of a fit fit
.
The arguments zero
and
nointercept
can be inputted with values
that fail. For example,
multinomial(zero = 2, nointercept = 1:3)
means the second linear/additive predictor is
identically zero, which will cause a failure.
Be careful about the use of other
potentially contradictory constraints, e.g.,
multinomial(zero = 2, parallel = TRUE ~ x3)
.
If in doubt, apply constraints()
to the fitted object to check.
VGAM family functions with the
nsimEIM argument may have inaccurate working
weight matrices. If so, then the standard
errors of the regression coefficients
may be inaccurate. Thus output from
summary(fit)
, vcov(fit)
,
etc. may be misleading.
Changes relating to the lss
argument
have very important consequences and users
must beware. Good programming style is
to rely on the argument names and not on
the order.
T. W. Yee
Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer.
Kosmidis, I. and Firth, D. (2009). Bias reduction in exponential family nonlinear models. Biometrika, 96, 793–804.
Miranda-Soberanis, V. F. and Yee, T. W. (2019). New link functions for distribution–specific quantile regression based on vector generalized linear and additive models. Journal of Probability and Statistics, 5, 1–11.
Links
,
vglm
,
vgam
,
vglmff-class
,
UtilitiesVGAM
,
multilogitlink
,
multinomial
,
VGAMextra.
# Example 1
cumulative()
cumulative(link = "probitlink", reverse = TRUE, parallel = TRUE)

# Example 2
wdata <- data.frame(x2 = runif(nn <- 1000))
wdata <- transform(wdata,
                   y = rweibull(nn, shape = 2 + exp(1 + x2),
                                scale = exp(-0.5)))
fit <- vglm(y ~ x2, weibullR(lshape = logofflink(offset = -2), zero = 2),
            data = wdata)
coef(fit, mat = TRUE)

# Example 3; multivariate (multiple) response
## Not run:
ndata <- data.frame(x = runif(nn <- 500))
ndata <- transform(ndata,
                   y1 = rnbinom(nn, exp(1), mu = exp(3+x)),  # k is size
                   y2 = rnbinom(nn, exp(0), mu = exp(2-x)))
fit <- vglm(cbind(y1, y2) ~ x, negbinomial(zero = -2), ndata)
coef(fit, matrix = TRUE)
## End(Not run)

# Example 4
## Not run:
# fit1 and fit2 are equivalent
fit1 <- vglm(ymatrix ~ x2 + x3 + x4 + x5,
             cumulative(parallel = FALSE ~ 1 + x3 + x5), cdata)
fit2 <- vglm(ymatrix ~ x2 + x3 + x4 + x5,
             cumulative(parallel = TRUE ~ x2 + x4), cdata)
## End(Not run)

# Example 5
udata <- data.frame(x2 = rnorm(nn <- 200))
udata <- transform(udata,
                   x1copy = 1,  # Copy of the intercept
                   x3 = runif(nn),
                   y1 = rnorm(nn, 1 - 3*x2, sd = exp(1 + 0.2*x2)),
                   y2 = rnorm(nn, 1 - 3*x2, sd = exp(1)))
args(uninormal)
fit1 <- vglm(y1 ~ x2, uninormal, udata)  # This is okay
fit2 <- vglm(y2 ~ x2, uninormal(zero = 2), udata)  # This is okay
fit4 <- vglm(y2 ~ x2 + x1copy + x3,
             uninormal(zero = NULL, vfl = TRUE,
                       Form2 = ~ x1copy + x3 - 1), udata)
coef(fit4, matrix = TRUE)  # VFL model

# This creates potential conflict
clist <- list("(Intercept)" = diag(2), "x2" = diag(2))
fit3 <- vglm(y2 ~ x2, uninormal(zero = 2), data = udata,
             constraints = clist)  # Conflict!
coef(fit3, matrix = TRUE)  # Shows that clist[["x2"]] was overwritten,
constraints(fit3)  # i.e., 'zero' seems to override the 'constraints' arg

# Example 6 ('whitespace' argument)
pneumo <- transform(pneumo, let = log(exposure.time))
fit1 <- vglm(cbind(normal, mild, severe) ~ let,
             sratio(whitespace = FALSE, parallel = TRUE), pneumo)
fit2 <- vglm(cbind(normal, mild, severe) ~ let,
             sratio(whitespace = TRUE, parallel = TRUE), pneumo)
head(predict(fit1), 2)  # No white spaces
head(predict(fit2), 2)  # Uses white spaces

# Example 7 ('zero' argument with character input)
set.seed(123); n <- 1000
ldata <- data.frame(x2 = runif(n))
ldata <- transform(ldata, y1 = rlogis(n, loc = 5*x2, scale = exp(2)))
ldata <- transform(ldata, y2 = rlogis(n, loc = 5*x2, scale = exp(1*x2)))
ldata <- transform(ldata, w1 = runif(n))
ldata <- transform(ldata, w2 = runif(n))
fit7 <- vglm(cbind(y1, y2) ~ x2,
#            logistic(zero = "location1"),  # location1 is intercept-only
#            logistic(zero = "location2"),
#            logistic(zero = "location*"),  # Not okay... all is unmatched
#            logistic(zero = "scale1"),
#            logistic(zero = "scale2"),
#            logistic(zero = "scale"),  # Both scale parameters are matched
             logistic(zero = c("location", "scale2")),  # All but scale1
#            logistic(zero = c("LOCAT", "scale2")),  # Only scale2 is matched
#            logistic(zero = c("LOCAT")),  # Nothing is matched
#            trace = TRUE,
#            weights = cbind(w1, w2),
             weights = w1,
             data = ldata)
coef(fit7, matrix = TRUE)
concoef
is a generic function which extracts the constrained
(canonical) coefficients from objects returned by certain modelling
functions.
concoef(object, ...)
object |
An object for which the extraction of canonical coefficients is meaningful. |
... |
Other arguments fed into the specific methods function of the model. |
For constrained quadratic and additive ordination models, canonical coefficients are the elements of the C matrix used to form the latent variables. They are highly interpretable in ecology, and can be viewed as weights or loadings.
They are also applicable for reduced-rank VGLMs.
The value returned depends specifically on the methods function invoked.
concoef
replaces ccoef
;
the latter is deprecated.
For QO models, there is a direct inverse relationship between the
scaling of the latent variables (site scores) and the tolerances.
One normalization is for the latent variables to have unit variance.
Another normalization is for all the species' tolerances to be
unit (provided eq.tolerances
is TRUE
). These two
normalizations cannot simultaneously hold in general. For rank-R
models with R > 1 it becomes more complicated because
the latent variables are also uncorrelated. An important argument when
fitting quadratic ordination models is whether
eq.tolerances
is TRUE
or FALSE
. See Yee (2004) for details.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
concoef-method
,
concoef.qrrvglm
,
concoef.cao
,
coef
.
## Not run: set.seed(111) # This leads to the global solution hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental vars p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, family = poissonff, data = hspider, Crow1positive = FALSE) concoef(p1) ## End(Not run)
concoef
is a generic function used to return the constrained
(canonical) coefficients of a constrained ordination model.
The function invokes particular methods which depend on the class of
the first argument.
The object from which the constrained coefficients are extracted.
Computes confidence intervals (CIs)
for one or more parameters in a fitted model.
Currently the object must be a
"vglm"
object.
confintvglm(object, parm, level = 0.95, method = c("wald", "profile"), trace = NULL, ...)
object |
A fitted model object. |
parm , level , ...
|
Same as confint . |
method |
Character.
The default is the first method.
Abbreviations are allowed.
Currently |
trace |
Logical. If |
The default for
this methods function is based on confint.default
and assumes
asymptotic normality. In particular,
the coef
and
vcov
methods functions are used for
vglm-class
objects.
When method = "profile"
the function
profilevglm
is called to do the profiling. The code is very heavily
based on profile.glm
which was originally written by
D. M. Bates and W. N. Venables (For S in 1996)
and subsequently corrected by B. D. Ripley.
Sometimes the profiling method can give problems, for
example, cumulative
requires the
linear predictors not to intersect in the data cloud.
Such numerical problems are less common when
method = "wald"
, however, it is well-known
that inference based on profile likelihoods is generally
more accurate than Wald, especially when the sample size
is small.
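For instance, a quick comparison of the two methods might look like this (a minimal sketch, assuming fit1 is a fitted "vglm" object such as the one in the Examples below):
## Not run:
ci.wald <- confint(fit1, method = "wald")     # Fast; assumes asymptotic normality
ci.prof <- confint(fit1, method = "profile")  # Slower; often more accurate
ci.wald - ci.prof  # Large differences flag curvature in the likelihood
## End(Not run)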
The deviance (deviance(object)
) is used if possible,
else the difference
2 * (logLik(object) - ell)
is computed,
where ell
are the values of the loglikelihood on a grid.
For
Wald CIs and
rrvglm-class
objects, currently an error message is produced because
I haven't gotten around to writing the methods function;
it's not too hard, but I am too busy!
An interim measure is to
coerce the object into a "vglm"
object,
but then the confidence intervals will tend to be too narrow because
the estimated constraint matrices are treated as known.
For
Wald CIs and
vgam-class
objects, currently an error message is produced because
the theory is undeveloped.
Same as confint
.
The order of the values of argument method
may change
in the future without notice.
The functions
plot.profile.glm
and
pairs.profile.glm
from MASS
appear to work with output from this function.
Thomas Yee adapted confint.lm
to handle "vglm"
objects, for Wald-type
confidence intervals.
Also, profile.glm
was originally written by
D. M. Bates and W. N. Venables (For S in 1996)
and subsequently corrected by B. D. Ripley.
This function effectively calls confint.profile.glm()
in MASS.
vcovvlm
,
summaryvglm
,
confint
,
profile.glm
,
lrt.stat.vlm
,
wald.stat
,
plot.profile.glm
,
pairs.profile.glm
.
# Example 1: this is based on a glm example counts <- c(18,17,15,20,10,20,25,13,12) outcome <- gl(3, 1, 9); treatment <- gl(3, 3) glm.D93 <- glm(counts ~ outcome + treatment, family = poisson()) vglm.D93 <- vglm(counts ~ outcome + treatment, family = poissonff) confint(glm.D93) # needs MASS to be present on the system confint.default(glm.D93) # based on asymptotic normality confint(vglm.D93) confint(vglm.D93) - confint(glm.D93) # Should be all 0s confint(vglm.D93) - confint.default(glm.D93) # based on asympt. normality # Example 2: simulated negative binomial data with multiple responses ndata <- data.frame(x2 = runif(nn <- 100)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3+x2), size = exp(1)), y2 = rnbinom(nn, mu = exp(2-x2), size = exp(0))) fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, data = ndata, trace = TRUE) coef(fit1) coef(fit1, matrix = TRUE) confint(fit1) confint(fit1, "x2:1") # This might be improved to "x2" some day... ## Not run: confint(fit1, method = "profile") # Computationally expensive confint(fit1, "x2:1", method = "profile", trace = FALSE) ## End(Not run) fit2 <- rrvglm(y1 ~ x2, negbinomial(zero = NULL), data = ndata) confint(as(fit2, "vglm")) # Too narrow (SEs are biased downwards)
Extractor function for the constraint matrices of objects in the VGAM package.
constraints(object, ...) constraints.vlm(object, type = c("lm", "term"), all = TRUE, which, matrix.out = FALSE, colnames.arg = TRUE, rownames.arg = TRUE, ...)
object |
Some VGAM object, for example, having
class "vglm". |
type |
Character. Whether LM- or term-type constraints are to be returned.
The number of such matrices returned is equal to
|
all , which
|
If |
matrix.out |
Logical. If |
colnames.arg , rownames.arg
|
Logical. If |
... |
Other possible arguments such as |
Constraint matrices describe the relationship of coefficients/component functions of a particular explanatory variable between the linear/additive predictors in VGLM/VGAM models. For example, they may be all different (constraint matrix is the identity matrix) or all the same (constraint matrix has one column and has unit values).
VGLMs and VGAMs have constraint matrices which are known. The class of RR-VGLMs have constraint matrices which are unknown and are to be estimated.
The extractor function
constraints()
returns a list comprising
constraint matrices—usually one for each column of the
VLM model matrix, and in that order.
The list is labelled with the variable names.
Each constraint matrix has M rows, where M
is the number of linear/additive predictors,
and its rank equals its number of columns.
A model with no constraints at all has an order-M
identity matrix as each variable's
constraint matrix.
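As an illustrative sketch (the variable names x2 and x3 are hypothetical), for M = 2 linear/additive predictors a hand-built list of constraint matrices might look like:
## Not run:
clist <- list("(Intercept)" = diag(2),         # Separate intercepts
              "x2"          = diag(2),         # x2 gets a coefficient in each predictor
              "x3"          = matrix(1, 2, 1)) # x3 constrained to be parallel
# A list like this can be supplied via the 'constraints' argument of vglm().
## End(Not run)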
For vglm
and vgam
objects,
feeding in type = "term"
constraint matrices back
into the same model should work and give an identical model.
The default is the "lm"
-type constraint matrices;
this is a list with one constraint matrix per column of
the LM matrix.
See the constraints
argument of vglm
,
and the example below.
In all VGAM family functions zero = NULL
means
none of the linear/additive predictors are modelled as
intercepts-only.
Other arguments found in certain VGAM family functions
which affect constraint matrices include
parallel
and exchangeable
.
The constraints
argument in vglm
and vgam
allows constraint matrices to
be inputted. If so, then constraints(fit, type = "lm")
can
be fed into the constraints
argument of the same object
to get the same model.
The xij
argument does not affect constraint matrices; rather,
it allows each row of the constraint matrix to be multiplied by a
specified vector.
T. W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
is.parallel
,
is.zero
,
trim.constraints
.
VGLMs are described in vglm-class
;
RR-VGLMs are described in rrvglm-class
.
Arguments such as zero
and parallel
found in many VGAM
family functions are a way of creating/modifying constraint
matrices conveniently, e.g., see zero
.
See CommonVGAMffArguments
for more information.
# Fit the proportional odds model: pneumo <- transform(pneumo, let = log(exposure.time)) (fit1 <- vglm(cbind(normal, mild, severe) ~ sm.bs(let, 3), cumulative(parallel = TRUE, reverse = TRUE), data = pneumo)) coef(fit1, matrix = TRUE) constraints(fit1) # Parallel assumption results in this constraints(fit1, type = "term") # Same as the default ("vlm"-type) is.parallel(fit1) # An equivalent model to fit1 (needs the type "term" constraints): clist.term <- constraints(fit1, type = "term") # "term"-type constraints # cumulative() has no 'zero' argument to set to NULL (a good idea # when using the 'constraints' argument): (fit2 <- vglm(cbind(normal, mild, severe) ~ sm.bs(let, 3), data = pneumo, cumulative(reverse = TRUE), constraints = clist.term)) abs(max(coef(fit1, matrix = TRUE) - coef(fit2, matrix = TRUE))) # Should be zero # Fit a rank-1 stereotype (RR-multinomial logit) model: fit <- rrvglm(Country ~ Width + Height + HP, multinomial, data = car.all) constraints(fit) # All except the first are the estimated A matrix
Returns a vector similar to coef() comprising the centre of the parameter space (COPS) values, given a fitted VGLM regression.
cops(object, ...) copsvglm(object, beta.range = c(-5, 6), tol = .Machine$double.eps^0.25, dointercepts = TRUE, trace. = FALSE, slowtrain = FALSE, ...)
object |
A |
beta.range |
Numeric.
Interval for the numerical search.
After a little scaling, it is effectively
fed into the interval argument of optimize . |
tol |
Numeric.
Fed into the tol argument of optimize . |
dointercepts |
Logical.
Compute the COPS for the intercepts?
This should be set to |
trace. |
Logical. Print a running log? This may or may not work properly. |
slowtrain |
Logical.
If |
... |
currently unused but may be used in the future for further arguments passed into the other methods functions. |
For many models, some COPS values will be
Inf
or -Inf
so that manual checking is needed,
for example, poissonff
.
Each value returned may be effectively
that of beta.range
or NA
.
The answers returned by this function only
make sense if the COPSs are in the
interior of the parameter space.
This function was written specifically for
logistic regression but has much wider
applicability.
Currently the result returned depends critically
on beta.range
so that the answer should
be checked after several values are fed into
that argument.
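One way to do this is to recompute the COPS over widened intervals and compare (a minimal sketch; the endpoints below are illustrative only, and extra arguments are assumed to pass through to copsvglm):
## Not run:
cops1 <- cops(fit1)                           # Default beta.range = c(-5, 6)
cops2 <- cops(fit1, beta.range = c(-10, 11))  # Wider search interval
all.equal(cops1, cops2)  # Agreement suggests the answer is stable
## End(Not run)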
A named vector, similar to coefvlm
.
If trace. = TRUE
then a list is returned,
having a component comprising a
matrix of function evaluations used by
optimize
.
This function is experimental and can be made to run more efficiently in the future.
Thomas W. Yee.
Yee, T. W. (2024). Musings and new results on the parameter space. Under review.
## Not run: data("xs.nz", package = "VGAMdata") data1 <- na.omit(xs.nz[, c("age", "cancer", "sex")]) fit1 <- vglm(cancer ~ age + sex, binomialff, data1) cops(fit1) # 'beta.range' is okay here ## End(Not run)
## Not run: data("xs.nz", package = "VGAMdata") data1 <- na.omit(xs.nz[, c("age", "cancer", "sex")]) fit1 <- vglm(cancer ~ age + sex, binomialff, data1) cops(fit1) # 'beta.range' is okay here ## End(Not run)
About 3300 individual butterflies were caught in Malaya by the naturalist Corbet. They were classified to about 500 species.
data(corbet)
A data frame with 24 observations on the following 2 variables.
species
Number of species.
ofreq
Observed frequency of individual butterflies of that species.
In the early 1940s Corbet spent two years trapping butterflies in Malaya. Of interest was the total number of species. Some species were so rare (e.g., 118 species had only one specimen) that it was thought likely that there were many unknown species.
Actually, 119 species had over 24 observed frequencies,
so this could/should be appended to the data set.
Hence there are 620 species in total, in a sample whose
size exceeds the roughly 3300 individuals tabulated here.
Fisher, R. A., Corbet, A. S. and Williams, C. B. (1943). The Relation Between the Number of Species and the Number of Individuals in a Random Sample of an Animal Population. Journal of Animal Ecology, 12, 42–58.
summary(corbet)
A constrained quadratic ordination (CQO; formerly called canonical Gaussian ordination or CGO) model is fitted using the quadratic reduced-rank vector generalized linear model (QRR-VGLM) framework.
cqo(formula, family = stop("argument 'family' needs to be assigned"), data = list(), weights = NULL, subset = NULL, na.action = na.fail, etastart = NULL, mustart = NULL, coefstart = NULL, control = qrrvglm.control(...), offset = NULL, method = "cqo.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE, contrasts = NULL, constraints = NULL, extra = NULL, smart = TRUE, ...)
formula |
a symbolic description of the model to be fit. The RHS of the formula is applied to each linear predictor. Different variables in each linear predictor can be chosen by specifying constraint matrices. |
family |
a function of class "vglmff" (see vglmff-class ) describing what statistical model is to be fitted. |
data |
an optional data frame containing the variables in the model.
By default the variables are taken from environment(formula) . |
weights |
an optional vector or matrix of (prior) weights to be used in the fitting process. Currently, this argument should not be used. |
subset |
an optional logical vector specifying a subset of observations to be used in the fitting process. |
na.action |
a function which indicates what should happen when the data contain
NAs. |
etastart |
starting values for the linear predictors.
It is a |
mustart |
starting values for the fitted values. It can be a vector or a matrix. Some family functions do not make use of this argument. Currently, this argument probably should not be used. |
coefstart |
starting values for the coefficient vector. Currently, this argument probably should not be used. |
control |
a list of parameters for controlling the fitting process.
See qrrvglm.control . |
offset |
This argument must not be used. |
method |
the method to be used in fitting the model.
The default (and presently only) method |
model |
a logical value indicating whether the model frame
should be assigned in the |
x.arg , y.arg
|
logical values indicating whether
the model matrix and response matrix used in the fitting
process should be assigned in the |
contrasts |
an optional list. See the contrasts.arg of model.matrix.default . |
constraints |
an optional list of constraint matrices.
The components of the list must be named with the term it corresponds
to (and it must match in character format).
Each constraint matrix must have M rows. |
extra |
an optional list with any extra information that might be needed by the family function. |
smart |
logical value indicating whether smart prediction
( |
... |
further arguments passed into qrrvglm.control . |
QRR-VGLMs or constrained quadratic ordination (CQO) models
are estimated here by maximum likelihood estimation. Optimal linear
combinations of the environmental variables are computed, called
latent variables (these appear as latvar
for R = 1, else latvar1, latvar2, etc., in the output). Here, R
is the rank or the number of ordination axes. Each species'
response is then a regression of these latent variables using quadratic
polynomials on a transformed scale (e.g., log for Poisson counts, logit
for presence/absence responses). The solution is obtained iteratively
in order to maximize the log-likelihood function, or equivalently,
minimize the deviance.
The central formula (for Poisson and binomial species data) is given by

eta = B_1^T x_1 + A nu + sum_{s=1}^{S} (nu^T D_s nu) e_s

where x_1 is a vector (usually just a 1 for an intercept),
x_2 is a vector of environmental variables,
nu = C^T x_2 is an R-vector of latent variables, and
e_s is a vector of 0s but with a 1 in the s-th position.
The eta are a vector of linear/additive predictors,
e.g., the s-th element is eta_s = log(E[Y_s])
for the s-th species. The matrices
B_1, A, C and D_s are estimated from the data, i.e.,
contain the regression coefficients. The tolerance matrices
satisfy T_s = -0.5 D_s^(-1).
Many important CQO details are directly related to arguments
in
qrrvglm.control
, e.g., the argument noRRR
specifies which variables comprise x_1.
Theoretically, the four most popular VGAM family functions
to be used with cqo
correspond to the Poisson, binomial,
normal, and negative binomial distributions. The latter is a
2-parameter model. All of these are implemented, as well as the
2-parameter gamma.
For initial values, the function .Init.Poisson.QO
should
work reasonably well if the data is Poisson with species having equal
tolerances. It can be quite good on binary data too. Otherwise the
Cinit
argument in qrrvglm.control
can be used.
It is possible to relax the quadratic form to an additive model. The
result is a data-driven approach rather than a model-driven approach,
so that CQO is extended to constrained additive ordination
(CAO) when R = 1. See
cao
for more details.
In this documentation, M is the number of linear predictors
and S is the number of responses (species). Then
M = S for Poisson and binomial species data,
and M = 2S for negative binomial and gamma distributed species data.
Incidentally,
Unconstrained quadratic ordination (UQO)
may be performed by, e.g., fitting a Goodman's RC association model;
see uqo
and the Yee and Hadi (2014) referenced there.
For UQO, the response is the usual site-by-species matrix and
there are no environmental variables;
the site scores are free parameters.
UQO can be performed under the assumption that all species
have the same tolerance matrices.
An object of class "qrrvglm"
.
Local solutions are not uncommon when fitting CQO models. To increase
the chances of obtaining the global solution, increase the value
of the argument Bestof
in qrrvglm.control
.
For reproducibility of the results, it pays to set a different
random number seed before calling cqo
(the function
set.seed
does this). The function cqo
chooses initial values for C using .Init.Poisson.QO()
if Use.Init.Poisson.QO = TRUE
, else random numbers.
Unless I.tolerances = TRUE
or eq.tolerances = FALSE
,
CQO is computationally expensive with memory and time.
It pays to keep the rank down to 1
or 2. If eq.tolerances = TRUE
and I.tolerances = FALSE
then
the cost grows quickly with the number of species and sites (in terms of
memory requirements and time). The data needs to conform quite closely
to the statistical model, and the environmental range of the data should
be wide in order for the quadratics to fit the data well (bell-shaped
response surfaces). If not, RR-VGLMs will be more appropriate because
the response is linear on the transformed scale (e.g., log or logit)
and the ordination is called constrained linear ordination or CLO.
Like many regression models, CQO is sensitive to outliers (in the environmental and species data), sparse data, high leverage points, multicollinearity etc. For these reasons, it is necessary to examine the data carefully for these features and take corrective action (e.g., omitting certain species, sites, environmental variables from the analysis, transforming certain environmental variables, etc.). Any optimum lying outside the convex hull of the site scores should not be trusted. Fitting a CAO is recommended first, then upon transformations etc., possibly a CQO can be fitted.
For binary data, it is necessary to have ‘enough’ data. In general,
the number of sites ought to be much larger than the number of
species S, e.g., at least 100 sites for two species. Compared
to count (Poisson) data, numerical problems occur more frequently
with presence/absence (binary) data. For example, if
Rank = 1
and if the response data for each species is a string of all absences,
then all presences, then all absences (when enumerated along the latent
variable) then infinite parameter estimates will occur. In general,
setting I.tolerances = TRUE
may help.
This function was formerly called cgo
. It has been renamed to
reinforce a new nomenclature described in Yee (2006).
The input requires care, preparation and thought—a lot more than other ordination methods. Here is a partial checklist.
The number of species should be kept reasonably low, e.g., 12 max. Feeding in 100+ species wholesale is a recipe for failure. Choose a few species carefully. Using 10 well-chosen species is better than 100+ species thrown in willy-nilly.
Each species should be screened individually first, e.g.,
for presence/absence is the species totally absent or totally present
at all sites?
For presence/absence data sort(colMeans(data))
can help
avoid such species.
The number of explanatory variables should be kept low, e.g., 7 max.
Each explanatory variable should be screened individually first, e.g., is it heavily skewed or are there outliers? They should be plotted and then transformed where needed. They should not be too highly correlated with each other.
Each explanatory variable should be scaled, e.g.,
to mean 0 and unit variance.
This is especially needed for I.tolerance = TRUE
.
Keep the rank low. Only if the data is very good should a rank-2 model be attempted. Usually a rank-1 model is all that is practically possible, even after a lot of work. The rank-1 model should always be attempted first; one might then use its solution to provide initial values for a rank-2 model.
If the number of sites is large then choose a random sample of them. For example, choose a maximum of 500 sites. This will reduce the memory and time expense of the computations.
Try I.tolerance = TRUE
or eq.tolerance = FALSE
if the inputted data set is large,
so as to reduce the computational expense.
That's because the default, I.tolerance = FALSE
and
eq.tolerance = TRUE
, is very memory hungry.
By default, a rank-1 equal-tolerances QRR-VGLM model is fitted
(see qrrvglm.control
for the default control
parameters).
If Rank > 1
then the latent variables are always transformed
so that they are uncorrelated.
By default, the argument trace
is TRUE
meaning a running
log is printed out while the computations are taking place. This is
because the algorithm is computationally expensive, therefore users
might think that their computers have frozen if trace = FALSE
!
The argument Bestof
in qrrvglm.control
controls
the number of models fitted (each uses different starting values) to
the data. This argument is important because convergence may be to a
local solution rather than the global solution. Using
more starting values increases the chances of finding the global
solution. Always plot an ordination diagram (use the generic function
lvplot
) and see if it looks sensible. Local solutions
arise because the optimization problem is highly nonlinear, and this is
particularly true for CAO.
Many of the arguments applicable to cqo
are common to
vglm
and rrvglm.control
.
The most important arguments are
Rank
,
noRRR
,
Bestof
,
I.tolerances
,
eq.tolerances
,
isd.latvar
, and
MUXfactor
.
When fitting a 2-parameter model such as the negative binomial
or gamma, it pays to have eq.tolerances = TRUE
and
I.tolerances = FALSE
. This is because numerical problems can
occur when fitting the model far away from the global solution when
I.tolerances = TRUE
. Setting the two arguments as described will
slow down the computation considerably, however it is numerically
more stable.
In Example 1 below, an unequal-tolerances rank-1 QRR-VGLM is fitted to the
hunting spiders dataset, and
Example 2 is the equal-tolerances version. The latter is less likely to
have convergence problems compared to the unequal-tolerances model.
In Example 3 below, an equal-tolerances rank-2 QRR-VGLM is fitted to the
hunting spiders dataset.
The numerical difficulties encountered in fitting the rank-2 model
suggests a rank-1 model is probably preferable.
In Example 4 below, constrained binary quadratic ordination (in old
nomenclature, constrained Gaussian logit ordination) is fitted to some
simulated data coming from a species packing model.
With multivariate binary responses, one must
use multiple.responses = TRUE
to
indicate that the response (matrix) is multivariate. Otherwise, it is
interpreted as a single binary response variable.
In Example 5 below, the deviance residuals are plotted for each species.
This is useful as a diagnostic plot.
This is done by (re)regressing each species separately against the latent
variable.
Sometime in the future, this function might handle input of the form
cqo(x, y)
, where x
and y
are matrices containing
the environmental and species data respectively.
Thomas W. Yee. Thanks to Alvin Sou for converting a lot of the original FORTRAN code into C.
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
ter Braak, C. J. F. and Prentice, I. C. (1988). A theory of gradient analysis. Advances in Ecological Research, 18, 271–317.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
qrrvglm.control
,
Coef.qrrvglm
,
predictqrrvglm
,
calibrate.qrrvglm
,
model.matrixqrrvglm
,
vcovqrrvglm
,
rcqo
,
cao
,
uqo
,
rrvglm
,
poissonff
,
binomialff
,
negbinomial
,
gamma2
,
lvplot.qrrvglm
,
perspqrrvglm
,
trplot.qrrvglm
,
vglm
,
set.seed
,
hspider
,
trapO
.
## Not run: # Example 1; Fit an unequal tolerances model to the hunting spiders data hspider[,1:6] <- scale(hspider[,1:6]) # Standardized environmental variables set.seed(1234) # For reproducibility of the results p1ut <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, fam = poissonff, data = hspider, Crow1positive = FALSE, eq.tol = FALSE) sort(deviance(p1ut, history = TRUE)) # A history of all the iterations if (deviance(p1ut) > 1177) warning("suboptimal fit obtained") S <- ncol(depvar(p1ut)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow lvplot(p1ut, y = TRUE, lcol = clr, pch = 1:S, pcol = clr, las = 1) # Ordination diagram legend("topright", leg = colnames(depvar(p1ut)), col = clr, pch = 1:S, merge = TRUE, bty = "n", lty = 1:S, lwd = 2) (cp <- Coef(p1ut)) (a <- latvar(cp)[cp@latvar.order]) # Ordered site scores along the gradient # Names of the ordered sites along the gradient: rownames(latvar(cp))[cp@latvar.order] (aa <- Opt(cp)[, cp@Optimum.order]) # Ordered optimums along the gradient aa <- aa[!is.na(aa)] # Delete the species that is not unimodal names(aa) # Names of the ordered optimums along the gradient trplot(p1ut, which.species = 1:3, log = "xy", type = "b", lty = 1, lwd = 2, col = c("blue","red","green"), label = TRUE) -> ii # Trajectory plot legend(0.00005, 0.3, paste(ii$species[, 1], ii$species[, 2], sep = " and "), lwd = 2, lty = 1, col = c("blue", "red", "green")) abline(a = 0, b = 1, lty = "dashed") S <- ncol(depvar(p1ut)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow persp(p1ut, col = clr, label = TRUE, las = 1) # Perspective plot # Example 2; Fit an equal tolerances model. Less numerically fraught. set.seed(1234) p1et <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, Crow1positive = FALSE) sort(deviance(p1et, history = TRUE)) # A history of all the iterations if (deviance(p1et) > 1586) warning("suboptimal fit obtained") S <- ncol(depvar(p1et)) # Number of species clr <- (1:(S+1))[-7] # Omits yellow persp(p1et, col = clr, label = TRUE, las = 1) # Example 3: A rank-2 equal tolerances CQO model with Poisson data # This example is numerically fraught... need I.toler = TRUE too. 
set.seed(555) p2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, Crow1positive = FALSE, I.toler = TRUE, Rank = 2, Bestof = 3, isd.latvar = c(2.1, 0.9)) sort(deviance(p2, history = TRUE)) # A history of all the iterations if (deviance(p2) > 1127) warning("suboptimal fit obtained") lvplot(p2, ellips = FALSE, label = TRUE, xlim = c(-3,4), C = TRUE, Ccol = "brown", sites = TRUE, scol = "grey", pcol = "blue", pch = "+", chull = TRUE, ccol = "grey") # Example 4: species packing model with presence/absence data set.seed(2345) n <- 200; p <- 5; S <- 5 mydata <- rcqo(n, p, S, fam = "binomial", hi.abundance = 4, eq.tol = TRUE, es.opt = TRUE, eq.max = TRUE) myform <- attr(mydata, "formula") set.seed(1234) b1et <- cqo(myform, binomialff(multiple.responses = TRUE, link = "clogloglink"), data = mydata) sort(deviance(b1et, history = TRUE)) # A history of all the iterations lvplot(b1et, y = TRUE, lcol = 1:S, pch = 1:S, pcol = 1:S, las = 1) Coef(b1et) # Compare the fitted model with the 'truth' cbind(truth = attr(mydata, "concoefficients"), fitted = concoef(b1et)) # Example 5: Plot the deviance residuals for diagnostic purposes set.seed(1234) p1et <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi, Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull, Trocterr, Zoraspin) ~ WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux, poissonff, data = hspider, eq.tol = TRUE, trace = FALSE) sort(deviance(p1et, history = TRUE)) # A history of all the iterations if (deviance(p1et) > 1586) warning("suboptimal fit obtained") S <- ncol(depvar(p1et)) par(mfrow = c(3, 4)) for (ii in 1:S) { tempdata <- data.frame(latvar1 = c(latvar(p1et)), sppCounts = depvar(p1et)[, ii]) tempdata <- transform(tempdata, myOffset = -0.5 * latvar1^2) # For species ii, refit the model to get the deviance residuals fit1 <- vglm(sppCounts ~ offset(myOffset) + latvar1, poissonff, data = tempdata, trace = FALSE) # For checking: this should be 0 # print("max(abs(c(Coef(p1et)@B1[1,ii],Coef(p1et)@A[ii,1])-coef(fit1)))") # print( max(abs(c(Coef(p1et)@B1[1,ii],Coef(p1et)@A[ii,1])-coef(fit1))) ) # Plot the deviance residuals devresid <- resid(fit1, type = "deviance") predvalues <- predict(fit1) + fit1@offset ooo <- with(tempdata, order(latvar1)) plot(predvalues + devresid ~ latvar1, data = tempdata, col = "red", xlab = "latvar1", ylab = "", main = colnames(depvar(p1et))[ii]) with(tempdata, lines(latvar1[ooo], predvalues[ooo], col = "blue")) } ## End(Not run)
A variety of reported crash data cross-classified by time (hour of the day) and day of the week, accumulated over 2009. These include fatalities and injuries (by car), trucks, motor cycles, bicycles and pedestrians. There are some alcohol-related data too.
data(crashi) data(crashf) data(crashtr) data(crashmc) data(crashbc) data(crashp) data(alcoff) data(alclevels)
Data frames with hourly times as rows and days of the week as columns.
The alclevels
dataset has hourly times and alcohol levels.
Day of the week.
Blood alcohol level (milligrams alcohol per 100 millilitres of blood).
Each cell is the aggregate number of crashes reported at each
hour-day combination, over the 2009 calendar year.
The rownames
of each data frame are the
start times (hourly from midnight onwards) on a 24 hour clock,
e.g., 21 means 9.00pm to 9.59pm.
For crashes,
crashi
are the number of injuries by car,
crashf
are the number of fatalities by car
(not included in crashi
),
crashtr
are the number of crashes involving trucks,
crashmc
are the number of crashes involving motorcyclists,
crashbc
are the number of crashes involving bicycles,
and
crashp
are the number of crashes involving pedestrians.
For alcohol-related offences,
alcoff
are the number of alcohol offenders from
breath screening drivers,
and
alclevels
are the blood alcohol levels of fatally injured drivers.
http://www.transport.govt.nz/research/Pages/Motor-Vehicle-Crashes-in-New-Zealand-2009.aspx
.
Thanks to Warwick Goold and Alfian F. Hadi for assistance.
Motor Vehicles Crashes in New Zealand 2009; Statistical Statement Calendar Year 2009. Ministry of Transport, NZ Government; Yearly Report 2010. ISSN: 1176-3949
## Not run: plot(unlist(alcoff), type = "l", frame.plot = TRUE, axes = FALSE, col = "blue", bty = "o", main = "Alcoholic offenders on NZ roads, aggregated over 2009", sub = "Vertical lines at midnight (purple) and noon (orange)", xlab = "Day/hour", ylab = "Number of offenders") axis(1, at = 1 + (0:6) * 24 + 12, labels = colnames(alcoff)) axis(2, las = 1) axis(3:4, labels = FALSE, tick = FALSE) abline(v = sort(1 + c((0:7) * 24, (0:6) * 24 + 12)), lty = "dashed", col = c("purple", "orange")) ## End(Not run) # Goodmans RC models ## Not run: fitgrc1 <- grc(alcoff) # Rank-1 model fitgrc2 <- grc(alcoff, Rank = 2, Corner = FALSE, Uncor = TRUE) Coef(fitgrc2) ## End(Not run) ## Not run: biplot(fitgrc2, scaleA = 2.3, Ccol = "blue", Acol = "orange", Clabels = as.character(1:23), xlim = c(-1.3, 2.3), ylim = c(-1.2, 1)) ## End(Not run)
Fits a continuation ratio logit/probit/cloglog/cauchit/... regression model to an ordered (preferably) factor response.
cratio(link = "logitlink", parallel = FALSE, reverse = FALSE, zero = NULL, ynames = FALSE, Thresh = NULL, Trev = reverse, Tref = if (Trev) "M" else 1, whitespace = FALSE)
link |
Link function applied to
the M continuation-ratio probabilities. See Links for more choices. |
parallel |
A logical, or formula specifying which terms have equal/unequal coefficients. |
reverse |
Logical.
By default, the continuation ratios used are
logitlink(P[Y>j|Y>=j]).
If reverse = TRUE then
logitlink(P[Y<j|Y<=j])
will be used. |
ynames |
See |
zero |
An integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The values must be from the set {1, 2, ..., M}. |
Thresh , Trev , Tref
|
See |
whitespace |
See CommonVGAMffArguments . |
In this help file the response is assumed to be
a factor with ordered values 1, 2, ..., M+1, so that
M is the number of linear/additive predictors
eta_j.
There are a number of definitions for the
continuation ratio
in the literature. To make life easier, in the VGAM
package, we use continuation ratios and stopping
ratios
(see sratio
).
Continuation ratios deal with quantities such as
logitlink(P[Y>j|Y>=j]),
whereas stopping ratios deal with
logitlink(P[Y=j|Y>=j]).
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
No check is made to verify that the response is ordinal if the
response is a matrix;
see ordered
.
Boersch-Supan (2021) looks at sparse data and
the numerical problems that result;
see sratio
.
The response should be either a matrix of counts
(with row sums that are all positive), or a
factor. In both cases, the y
slot returned by
vglm
/vgam
/rrvglm
is the matrix
of counts.
For a nominal (unordered) factor response, the
multinomial logit model (multinomial
)
is more appropriate.
Here is an example of the usage of the parallel
argument. If there are covariates x1
, x2
and x3
, then parallel = TRUE ~ x1 + x2 -1
and parallel = FALSE ~ x3
are equivalent. This
would constrain the regression coefficients for x1
and x2
to be equal; those of the intercepts and
x3
would be different.
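As a sketch (with a hypothetical response ymat and data frame mydat), the two equivalent specifications are:
## Not run:
fit.a <- vglm(ymat ~ x1 + x2 + x3, cratio(parallel = TRUE ~ x1 + x2 - 1), data = mydat)
fit.b <- vglm(ymat ~ x1 + x2 + x3, cratio(parallel = FALSE ~ x3), data = mydat)
# Both constrain the x1 and x2 coefficients to be equal across the
# linear predictors; the intercepts and x3 are unconstrained.
## End(Not run)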
Thomas W. Yee
See sratio
.
sratio
,
acat
,
cumulative
,
multinomial
,
CM.equid
,
CommonVGAMffArguments
,
margeff
,
pneumo
,
budworm
,
logitlink
,
probitlink
,
clogloglink
,
cauchitlink
.
pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, cratio(parallel = TRUE), data = pneumo)) coef(fit, matrix = TRUE) constraints(fit) predict(fit) predict(fit, untransform = TRUE) margeff(fit)
Fits a cumulative link regression model to a (preferably ordered) factor response.
cumulative(link = "logitlink", parallel = FALSE, reverse = FALSE, multiple.responses = FALSE, ynames = FALSE, Thresh = NULL, Trev = reverse, Tref = if (Trev) "M" else 1, whitespace = FALSE)
link |
Link function applied to the J cumulative probabilities. See Links for more choices. |
parallel |
A logical or formula specifying which terms have
equal/unequal coefficients.
See below for more information about the parallelism
assumption.
The default results in what some people call the
generalized ordered logit model to be fitted.
If parallel = TRUE then it does not apply to the intercept.
The partial proportional odds model can be
fitted by assigning this argument something like
parallel = TRUE ~ -1 + x3 + x5
so that there is one regression coefficient
for x3 and x5. |
reverse |
Logical.
By default, the cumulative probabilities used are
P(Y<=1), P(Y<=2), ..., P(Y<=J).
If reverse = TRUE then
P(Y>=2), P(Y>=3), ..., P(Y>=J+1)
will be used. |
ynames |
See |
multiple.responses |
Logical.
Multiple responses?
If |
Thresh |
Character.
The choices concern constraint matrices applied
to the intercepts.
They can be constrained to be
equally-spaced (equidistant)
etc.
See CM.equid .
If equally-spaced then the direction and the
reference level are controlled
by Trev and Tref . |
Trev , Tref
|
Support arguments for Thresh . |
whitespace |
See CommonVGAMffArguments . |
In this help file the response is assumed
to be a factor with ordered values
1, 2, ..., J+1.
Hence M
is the number of linear/additive
predictors eta_j;
for
cumulative()
one has M = J.
This VGAM family function fits the class of cumulative link models to (hopefully) an ordinal response. By default, the non-parallel cumulative logit model is fitted, i.e.,

eta_j = logitlink(P[Y <= j])

where j = 1, ..., M and
the eta_j
are not constrained to be parallel.
This is also known as the non-proportional odds model.
If the logit link is replaced by a complementary log-log link
(
clogloglink
) then
this is known as the proportional-hazards model.
In almost all the literature, the constraint matrices
associated with this family of models are known.
For example, setting
parallel = TRUE
will make all constraint matrices
(except for the intercept) equal to a vector of 1's.
If the constraint matrices are equal, unknown and to
be estimated,
then this can be achieved by fitting the model as a
reduced-rank vector generalized
linear model (RR-VGLM; see
rrvglm
).
Currently, reduced-rank vector generalized additive models
(RR-VGAMs) have not been implemented here.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
No check is made to verify that the response is ordinal
if the response is a matrix;
see ordered
.
Boersch-Supan (2021) looks at sparse data and
the numerical problems that result;
see sratio
.
The response should be either a matrix of counts
(with row sums that
are all positive), or a factor. In both cases,
the y
slot
returned by
vglm
/vgam
/rrvglm
is the matrix
of counts.
The formula must contain an intercept term.
Other VGAM family functions for an ordinal response
include
acat
,
cratio
,
sratio
.
For a nominal (unordered) factor response, the multinomial
logit model (multinomial
) is more appropriate.
With the logit link, setting parallel =
TRUE
will fit a proportional odds model. Note
that the TRUE
here does not apply to
the intercept term. In practice, the validity
of the proportional odds assumption
needs to be checked, e.g., by a likelihood
ratio test (LRT). If acceptable on the data,
then numerical problems are less likely
to occur during the fitting, and there are
less parameters. Numerical problems occur
when the linear/additive predictors cross,
which results in probabilities outside of (0, 1); setting
parallel = TRUE
will help avoid this problem.
Here is an example of the usage of the parallel
argument.
If there are covariates x2
, x3
and
x4
, then
parallel = TRUE ~ x2 + x3 -1
and
parallel = FALSE ~ x4
are equivalent.
This would constrain the regression coefficients
for x2
and x3
to be equal;
those of the intercepts and x4
would be different.
If the data is inputted in long format
(not wide format, as in pneumo
below)
and the self-starting initial values are not
good enough then try using
mustart
,
coefstart
and/or
etastart
.
See the example below.
To fit the proportional odds model one can use the
VGAM family function propodds
.
Note that propodds(reverse)
is equivalent to
cumulative(parallel = TRUE, reverse = reverse)
(which is equivalent to
cumulative(parallel =
TRUE, reverse = reverse, link = "logitlink")
).
It is for convenience only. A call to
cumulative()
is preferred since it reminds the user
that a parallelism assumption is made, as well as
being a lot more flexible.
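The stated equivalence is easy to verify numerically (a small check using the pneumo data, as in the Examples below; propodds defaults to reverse = TRUE):
## Not run:
pneumo <- transform(pneumo, let = log(exposure.time))
fit.po <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
fit.cu <- vglm(cbind(normal, mild, severe) ~ let,
               cumulative(parallel = TRUE, reverse = TRUE), data = pneumo)
max(abs(coef(fit.po) - coef(fit.cu)))  # Should be essentially zero
## End(Not run)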
Category specific effects may be modelled using
the xij
-facility; see
vglm.control
and fill1
.
With most choices of Thresh,
the first few fitted regression coefficients
need care in their interpretation. For example,
some values could be the distance away from
the median intercept. Typing something
like constraints(fit)[[1]]
gives the
constraint matrix of the intercept term.
Thomas W. Yee
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
Agresti, A. (2010). Analysis of Ordinal Categorical Data, 2nd ed. Hoboken, NJ, USA: Wiley.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Tutz, G. (2012). Regression for Categorical Data, Cambridge: Cambridge University Press.
Tutz, G. and Berger, M. (2022). Sparser ordinal regression models based on parametric and additive location-shift approaches. International Statistical Review, 90, 306–327. doi:10.1111/insr.12484.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10.
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
propodds
,
constraints
,
CM.ones
,
CM.equid
,
R2latvar
,
ordsup
,
prplot
,
margeff
,
acat
,
cratio
,
sratio
,
multinomial
,
CommonVGAMffArguments
,
pneumo
,
budworm
,
Links
,
hdeff.vglm
,
logitlink
,
probitlink
,
clogloglink
,
cauchitlink
,
logistic1
.
# Proportional odds model (p.179) of McCullagh and Nelder (1989) pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = TRUE, reverse = TRUE), pneumo)) depvar(fit) # Sample proportions (good technique) fit@y # Sample proportions (bad technique) weights(fit, type = "prior") # Number of observations coef(fit, matrix = TRUE) constraints(fit) # Constraint matrices apply(fitted(fit), 1, which.max) # Classification apply(predict(fit, newdata = pneumo, type = "response"), 1, which.max) # Classification R2latvar(fit) # Check that the model is linear in let ---------------------- fit2 <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2), cumulative(reverse = TRUE), data = pneumo) ## Not run: plot(fit2, se = TRUE, overlay = TRUE, lcol = 1:2, scol = 1:2) ## End(Not run) # Check the proportional odds assumption with a LRT ---------- (fit3 <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = FALSE, reverse = TRUE), pneumo)) pchisq(2 * (logLik(fit3) - logLik(fit)), df = length(coef(fit3)) - length(coef(fit)), lower.tail = FALSE) lrtest(fit3, fit) # More elegant # A factor() version of fit ---------------------------------- # This is in long format (cf. wide format above) Nobs <- round(depvar(fit) * c(weights(fit, type = "prior"))) sumNobs <- colSums(Nobs) # apply(Nobs, 2, sum) pneumo.long <- data.frame(symptoms = ordered(rep(rep(colnames(Nobs), nrow(Nobs)), times = c(t(Nobs))), levels = colnames(Nobs)), let = rep(rep(with(pneumo, let), each = ncol(Nobs)), times = c(t(Nobs)))) with(pneumo.long, table(let, symptoms)) # Should be same as pneumo (fit.long1 <- vglm(symptoms ~ let, data = pneumo.long, trace = TRUE, cumulative(parallel = TRUE, reverse = TRUE))) coef(fit.long1, matrix = TRUE) # cf. coef(fit, matrix = TRUE) # Could try using mustart if fit.long1 failed to converge. mymustart <- matrix(sumNobs / sum(sumNobs), nrow(pneumo.long), ncol(Nobs), byrow = TRUE) fit.long2 <- vglm(symptoms ~ let, mustart = mymustart, cumulative(parallel = TRUE, reverse = TRUE), data = pneumo.long, trace = TRUE) coef(fit.long2, matrix = TRUE) # cf. coef(fit, matrix = TRUE)
Maximum likelihood estimation of the 3-parameter Dagum distribution.
dagum(lscale = "loglink", lshape1.a = "loglink", lshape2.p = "loglink", iscale = NULL, ishape1.a = NULL, ishape2.p = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), gshape2.p = exp(-5:5), probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lss |
See |
lshape1.a , lscale , lshape2.p
|
Parameter link functions applied to the
(positive) parameters $a$, scale, and $p$. |
iscale , ishape1.a , ishape2.p , imethod , zero
|
See |
gscale , gshape1.a , gshape2.p
|
See |
probs.y |
See |
The 3-parameter Dagum distribution is the 4-parameter
generalized beta II distribution with shape parameter $q = 1$.
It is known under various other names, such as the Burr III,
inverse Burr, beta-K, and 3-parameter kappa distribution.
It can be considered a generalized log-logistic distribution.
Some distributions which are special cases of the 3-parameter
Dagum are the inverse Lomax ($a = 1$), Fisk ($p = 1$),
and the inverse paralogistic ($a = p$).
More details can be found in Kleiber and Kotz (2003).
The Dagum distribution has a cumulative distribution function
$$F(y) = \{1 + (y/b)^{-a}\}^{-p}$$
which leads to a probability density function
$$f(y) = a p \, y^{ap-1} / [b^{ap} \{1 + (y/b)^{a}\}^{p+1}]$$
for $y > 0$, $b > 0$, $a > 0$, $p > 0$.
Here, $b$ is the scale parameter
scale
,
and the others are shape parameters.
The mean is
$$E(Y) = b \, \Gamma(p + 1/a) \, \Gamma(1 - 1/a) / \Gamma(p)$$
provided $-ap < 1 < a$; these are returned as the fitted
values. This family function handles multiple responses.
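As a quick numerical sanity check of the mean formula above, one can compare it with a simulated mean (a sketch; the parameter values below are arbitrary and chosen only for illustration):

a <- 3; b <- exp(1); p <- 2   # shape1.a, scale, shape2.p (arbitrary)
b * gamma(p + 1/a) * gamma(1 - 1/a) / gamma(p)  # Theoretical mean
set.seed(1)
mean(rdagum(1e5, scale = b, shape1.a = a, shape2.p = p))  # Should be close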
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as
vglm
, and vgam
.
See the notes in genbetaII
.
From Kleiber and Kotz (2003), the MLE is rather sensitive to
isolated observations located sufficiently far from the majority
of the data. Reliable estimation of the scale parameter requires
$n > 7000$, while estimates for $a$ and $p$ can be
considered unbiased for $n > 2000$ or 3000.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Dagum
,
genbetaII
,
betaII
,
sinmad
,
fisk
,
inv.lomax
,
lomax
,
paralogistic
,
inv.paralogistic
,
simulate.vlm
.
## Not run: ddata <- data.frame(y = rdagum(n = 3000, scale = exp(2), shape1 = exp(1), shape2 = exp(1))) fit <- vglm(y ~ 1, dagum(lss = FALSE), data = ddata, trace = TRUE) fit <- vglm(y ~ 1, dagum(lss = FALSE, ishape1.a = exp(1)), data = ddata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) summary(fit) ## End(Not run)
Density, distribution function, quantile function and random
generation for the Dagum distribution with shape parameters
a
and p
, and scale parameter scale
.
ddagum(x, scale = 1, shape1.a, shape2.p, log = FALSE) pdagum(q, scale = 1, shape1.a, shape2.p, lower.tail = TRUE, log.p = FALSE) qdagum(p, scale = 1, shape1.a, shape2.p, lower.tail = TRUE, log.p = FALSE) rdagum(n, scale = 1, shape1.a, shape2.p)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. If |
shape1.a , shape2.p
|
shape parameters. |
scale |
scale parameter. |
log |
Logical.
If |
lower.tail , log.p
|
See dagum
, which is the VGAM family function
for estimating the parameters by maximum likelihood estimation.
ddagum
gives the density,
pdagum
gives the distribution function,
qdagum
gives the quantile function, and
rdagum
generates random deviates.
The Dagum distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee and Kai Huang
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
probs <- seq(0.1, 0.9, by = 0.1) shape1.a <- 1; shape2.p <- 2 # Should be 0: max(abs(pdagum(qdagum(probs, shape1.a = shape1.a, shape2.p = shape2.p), shape1.a = shape1.a, shape2.p = shape2.p) - probs)) ## Not run: par(mfrow = c(1, 2)) x <- seq(-0.01, 5, len = 401) plot(x, dexp(x), type = "l", col = "black", ylab = "", las = 1, ylim = c(0, 1), main = "Black is std exponential, others are ddagum(x, ...)") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 1), col = "orange") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 2), col = "blue") lines(x, ddagum(x, shape1.a = shape1.a, shape2.p = 5), col = "green") legend("topright", col = c("orange","blue","green"), lty = rep(1, len = 3), legend = paste("shape1.a =", shape1.a, ", shape2.p =", c(1, 2, 5))) plot(x, pexp(x), type = "l", col = "black", ylab = "", las = 1, main = "Black is std exponential, others are pdagum(x, ...)") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 1), col = "orange") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 2), col = "blue") lines(x, pdagum(x, shape1.a = shape1.a, shape2.p = 5), col = "green") legend("bottomright", col = c("orange", "blue", "green"), lty = rep(1, len = 3), legend = paste("shape1.a =", shape1.a, ", shape2.p =", c(1, 2, 5))) ## End(Not run)
Density for the AR-1 model.
dAR1(x, drift = 0, var.error = 1, ARcoef1 = 0.0, type.likelihood = c("exact", "conditional"), log = FALSE)
x |
vector of quantiles. |
drift |
the scaled mean (also known as the drift parameter). |
log |
Logical.
If |
type.likelihood , var.error , ARcoef1
|
See |
Most of the background to this function is given
in AR1
.
All the arguments are converted into matrices, and then
all their dimensions are obtained. They are then coerced
to a common size: the number of rows is the maximum of the
arguments' numbers of rows, and likewise for the number of columns.
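A minimal sketch of this recycling behaviour (the quantiles and parameter values are arbitrary); scalar parameters are recycled to match the matrix of quantiles:

dAR1(matrix(1:6 / 10, nrow = 3, ncol = 2),
     drift = 0, var.error = 1, ARcoef1 = 0.5)  # A 3 x 2 matrix of densities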
dAR1
gives the density.
T. W. Yee and Victor Miranda
AR1
.
## Not run: nn <- 100; set.seed(1) tdata <- data.frame(index = 1:nn, TS1 = arima.sim(nn, model = list(ar = -0.50), sd = exp(1))) fit1 <- vglm(TS1 ~ 1, AR1, data = tdata, trace = TRUE) rhobitlink(-0.5) coef(fit1, matrix = TRUE) (Cfit1 <- Coef(fit1)) summary(fit1) # SEs are useful to know logLik(fit1) sum(dAR1(depvar(fit1), drift = Cfit1[1], var.error = (Cfit1[2])^2, ARcoef1 = Cfit1[3], log = TRUE)) fit2 <- vglm(TS1 ~ 1, AR1(type.likelihood = "cond"), data = tdata, trace = TRUE) (Cfit2 <- Coef(fit2)) # Okay for intercept-only models logLik(fit2) head(keep <- dAR1(depvar(fit2), drift = Cfit2[1], var.error = (Cfit2[2])^2, ARcoef1 = Cfit2[3], type.likelihood = "cond", log = TRUE)) sum(keep[-1]) ## End(Not run)
Captures of Peromyscus maniculatus collected at East Stuart Gulch, Colorado, USA.
data(deermice)
The format is a data frame.
Peromyscus maniculatus is a rodent native to North America. The deer mouse is small in size, only about 8 to 10 cm long, not counting the length of the tail.
Originally,
the columns of this data frame
represent the sex (m
or f
),
the ages (y
: young, sa
: semi-adult, a
:
adult), the weights in grams, and the capture histories of
38 individuals over 6 trapping occasions (1: captured, 0:
not captured).
The data set was collected by V. Reid and distributed with the CAPTURE program of Otis et al. (1978).
deermice
has 38 deermice whereas
Perom
had 36 deermice
(Perom
has been withdrawn.)
In deermice
the two semi-adults have been classified
as adults. The sex
variable has 1 for female, and 0
for male.
Huggins, R. M. (1991). Some practical aspects of a conditional likelihood approach to capture experiments. Biometrics, 47, 725–732.
Otis, D. L. et al. (1978). Statistical inference from capture data on closed animal populations, Wildlife Monographs, 62, 3–135.
posbernoulli.b
,
posbernoulli.t
,
fill1
.
head(deermice) ## Not run: fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + age, posbernoulli.t(parallel.t = TRUE), deermice, trace = TRUE) coef(fit1) coef(fit1, matrix = TRUE) ## End(Not run)
Plots a probability density function associated with a LMS quantile regression.
deplot.lmscreg(object, newdata = NULL, x0, y.arg, show.plot = TRUE, ...)
object |
A VGAM quantile regression model, i.e.,
an object produced by modelling functions such as
|
newdata |
Optional data frame containing secondary variables such as sex. It should have a maximum of one row. The default is to use the original data. |
x0 |
Numeric. The value of the primary variable at which to make the ‘slice’. |
y.arg |
Numerical vector. The values of the response variable at which to evaluate the density. This should be a grid that is fine enough to ensure the plotted curves are smooth. |
show.plot |
Logical. Plot it? If |
... |
Graphical parameters that are passed into
|
This function calls, e.g., deplot.lms.yjn
in order to
compute the density function.
The original object
but with a list
placed in the slot post
, called
@post$deplot
. The list has components
newdata |
The argument |
y |
The argument |
density |
Vector of the density function values evaluated
at |
plotdeplot.lmscreg
actually does the plotting.
Thomas W. Yee
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
plotdeplot.lmscreg
,
qtplot.lmscreg
,
lms.bcn
,
lms.bcg
,
lms.yjn
.
## Not run: fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), bmi.nz) ygrid <- seq(15, 43, by = 0.25) deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "green", llwd = 2, main = "BMI distribution at ages 20 (green), 40 (blue), 60 (red)") deplot(fit, x0 = 40, y = ygrid, add = TRUE, col = "blue", llwd = 2) deplot(fit, x0 = 60, y = ygrid, add = TRUE, col = "red", llwd = 2) -> a names(a@post$deplot) a@post$deplot$newdata head(a@post$deplot$y) head(a@post$deplot$density) ## End(Not run)
A generic function that extracts the response/dependent variable from objects.
depvar(object, ...)
object |
An object that has some response/dependent variable. |
... |
Other arguments fed into the specific methods function of
the model.
In particular, sometimes |
In general,
this function is preferred to calling fit@y
, say.
The response/dependent variable, usually as a matrix or vector.
Thomas W. Yee
pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)) fit@y # Sample proportions (not recommended) depvar(fit) # Better than using fit@y weights(fit, type = "prior") # Number of observations
Density for the extended log-F distribution.
dextlogF(x, lambda, tau, location = 0, scale = 1, log = FALSE)
x |
Vector of quantiles. |
lambda , tau
|
See |
location , scale
|
See |
log |
If |
The details are given in extlogF1
.
dextlogF
gives the density.
T. W. Yee
## Not run: x <- seq(-2, 8, by = 0.1); mytau <- 0.25; mylambda <- 0.2 plot(x, dextlogF(x, mylambda, tau = mytau), type = "l", las = 1, col = "blue", ylab = "PDF (log-scale)", log = "y", main = "Extended log-F density function is blue", sub = "Asymmetric Laplace is orange dashed") lines(x, dalap(x, tau = mytau, scale = 3.5), col = "orange", lty = 2) abline(v = 0, col = "gray", lty = 2) ## End(Not run)
Returns the residual degrees-of-freedom extracted from a fitted VGLM object.
df.residual_vlm(object, type = c("vlm", "lm"), ...)
object |
an object for which the degrees-of-freedom are desired,
e.g., a |
type |
the type of residual degrees-of-freedom wanted. In some applications the 'usual' LM-type value may be more appropriate. The default is the first choice. |
... |
additional optional arguments. |
When a VGLM is fitted, a large (VLM) generalized least
squares (GLS) fit is done at each IRLS iteration. To do this, an
ordinary least squares (OLS) fit is performed by
transforming the GLS using Cholesky factors.
The number of rows is $M$ times the ‘ordinary’ number
of rows of the LM-type model: $nM$.
Here, $M$ is the number of linear/additive predictors.
So the formula for the VLM-type residual degrees-of-freedom
is $nM - p_{VLM}$ where $p_{VLM}$ is the number of
columns of the ‘big’ VLM matrix.
The formula for the LM-type residual degrees-of-freedom
is $n - p_{LM(j)}$ where $p_{LM(j)}$ is the number of
columns of the ‘ordinary’ LM matrix corresponding
to the $j$th linear/additive predictor.
The value of the residual degrees-of-freedom extracted
from the object.
When type = "vlm"
this is a single integer, and
when type = "lm"
this is an $M$-vector of
integers.
vglm
,
deviance
,
lm
,
anova.vglm
.
pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)) head(model.matrix(fit, type = "vlm")) head(model.matrix(fit, type = "lm")) df.residual(fit, type = "vlm") # n * M - p_VLM nobs(fit, type = "vlm") # n * M nvar(fit, type = "vlm") # p_VLM df.residual(fit, type = "lm") # n - p_LM(j) nobs(fit, type = "lm") # n nvar(fit, type = "lm") # p_LM nvar_vlm(fit, type = "lm") # p_LM(j) (<= p_LM elementwise)
Plots a 1- or 2-parameter GAITD combo probability mass function.
dgaitdplot(theta.p, fam = "pois", a.mix = NULL, i.mix = NULL, d.mix = NULL, a.mlm = NULL, i.mlm = NULL, d.mlm = NULL, truncate = NULL, max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0, pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE, theta.a = theta.p, theta.i = theta.p, theta.d = theta.p, deflation = FALSE, plot.it = TRUE, new.plot = TRUE, offset.x = ifelse(new.plot, 0, 0.25), type.plot = "h", xlim = c(0, min(100, max.support + 2)), ylim = NULL, xlab = "", ylab = "Probability", main = "", cex.main = 1.2, posn.main = NULL, all.col = NULL, all.lty = NULL, all.lwd = NULL, lty.p = "solid", lty.a.mix = "longdash", lty.a.mlm = "longdash", lty.i.mix = "dashed", lty.i.mlm = "dashed", lty.d.mix = "solid", lty.d.mlm = "solid", lty.d.dip = "dashed", col.p = "pink2", col.a.mix = artichoke.col, col.a.mlm = asparagus.col, col.i.mix = indigo.col, col.i.mlm = iris.col, col.d.mix = deer.col, col.d.mlm = dirt.col, col.d.dip = desire.col, col.t = turquoise.col, cex.p = 1, lwd.p = NULL, lwd.a = NULL, lwd.i = NULL, lwd.d = NULL, iontop = TRUE, dontop = TRUE, las = 0, lend = "round", axes.x = TRUE, axes.y = TRUE, Plot.trunc = TRUE, cex.t = 1, pch.t = 1, baseparams.argnames = NULL, nparams = 1, flip.args = FALSE, ...)
theta.p |
Numeric, usually scalar but may have length 2.
This matches with, e.g., |
fam |
Character, |
a.mix , i.mix , a.mlm , i.mlm
|
See |
d.mix , d.mlm
|
See |
truncate , max.support
|
See |
pobs.mix , pobs.mlm , byrow.aid
|
See |
pstr.mix , pstr.mlm , pdip.mix , pdip.mlm
|
See |
theta.a , theta.i , theta.d
|
Similar to |
deflation |
Logical. Plot the deflation (dip) probabilities? |
plot.it |
Logical. Plot the PMF? |
new.plot , offset.x
|
If |
xlim , ylim , xlab , ylab
|
See |
main , cex.main , posn.main
|
Character, size and position of |
all.col , all.lty , all.lwd
|
These arguments allow all the colours,
line types and line widths arguments to be
assigned to these values, i.e., so that they
are the same for all values of the support.
For example, if |
lty.p , lty.a.mix , lty.a.mlm , lty.i.mix , lty.i.mlm
|
Line type for parent, altered and inflated.
See |
col.p , col.a.mix , col.a.mlm , col.i.mix , col.i.mlm
|
Line colour for parent (nonspecial), altered, inflated,
truncated and deflated values.
See |
lty.d.mix , lty.d.mlm , lty.d.dip
|
Similar to above.
Used when |
col.d.mix , col.d.mlm , col.d.dip
|
Similar to above.
Used when |
col.t |
Point colour for truncated values, the default is
|
type.plot , cex.p
|
The former matches 'type' argument in
|
lwd.p , lwd.a , lwd.i , lwd.d
|
Line width for parent, altered and inflated.
See |
las , lend
|
See |
iontop , dontop
|
Logicals.
Draw the inflated and deflated bars on top?
The default is to draw the spikes on top, but if
|
axes.x , axes.y
|
Logical. Plot axes?
See |
Plot.trunc , cex.t , pch.t
|
Logical. Plot the truncated values?
If so, then specify the size and plotting character.
See |
baseparams.argnames |
Character string specifying the argument name for the generic
parameter |
nparams , flip.args
|
Not for use by the user. It is used internally to handle the NBD. |
... |
Currently unused but there is provision for
passing graphical arguments in the future;
see |
This is meant to be a crude function for plotting the PMF of the GAITD combo model. Some flexibility is offered via many graphical arguments, but there are still many improvements that could be made.
A list is returned invisibly. The components are:
x |
The integer values between the values
of |
pmf.z |
The value of the PMF, by
calling the |
sc.parent |
The same level as the scaled parent
distribution. Thus for inflated values,
the value where the spikes begin. And for
deflated values, the value at the top of
the dips. This is a convenient way to obtain
them as it is quite cumbersome to compute
them manually. For any nonspecial value,
such as non-inflated and non-deflated values,
they are equal to |
unsc.parent |
Unscaled parent distribution. If there is no alteration, inflation, deflation and truncation then this is the basic PMF stipulated by the parent distribution only. Usually this is FYI only. |
This utility function may change a lot in the future.
Because this function is called by a shiny app,
if any parameter values lie outside the
parameter space then stop
will be called.
For example, too much deflation results in
NaN
values returned by
dgaitdnbinom
.
T. W. Yee.
plotdgaitd
,
spikeplot
,
meangaitd
,
Gaitdpois
,
gaitdpoisson
,
Gaitdnbinom
,
multilogitlink
.
## Not run: i.mix <- seq(0, 25, by = 5) mean.p <- 10; size.p <- 8 dgaitdplot(c(size.p, mean.p), fam = "nbinom", xlim = c(0, 25), a.mix = i.mix + 1, i.mix = i.mix, pobs.mix = 0.1, pstr.mix = 0.1, lwd.i = 2, lwd.p = 2, lwd.a = 2) ## End(Not run)
Density, distribution function, quantile function and random generation for Huber's least favourable distribution, see Huber and Ronchetti (2009).
dhuber(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) edhuber(x, k = 0.862, mu = 0, sigma = 1, log = FALSE) rhuber(n, k = 0.862, mu = 0, sigma = 1) qhuber(p, k = 0.862, mu = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE) phuber(q, k = 0.862, mu = 0, sigma = 1, lower.tail = TRUE, log.p = FALSE)
x , q
|
numeric vector, vector of quantiles. |
p |
vector of probabilities. |
n |
number of random values to be generated.
If |
k |
numeric. Borderline value of central Gaussian part
of the distribution.
This is known as the tuning constant, and should be positive.
For example, |
mu |
numeric. Distribution mean. |
sigma |
numeric. Distribution scale ( |
log |
Logical.
If |
lower.tail , log.p
|
Details are given in huber2
, the
VGAM family function for estimating the
parameters mu
and sigma
.
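As a quick sketch check that dhuber is a proper density (the tuning constant below is arbitrary):

integrate(dhuber, lower = -Inf, upper = Inf, k = 1.5)$value  # Should be ~1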
dhuber
gives a vector of density values.
edhuber
gives a list with components val
(density values) and eps
(contamination proportion).
rhuber
gives a vector of random numbers generated
by Huber's least favourable distribution.
phuber
gives the distribution function,
qhuber
gives the quantile function.
Christian Hennig wrote [d,ed,r]huber()
(from smoothmest) and
slight modifications were made by T. W. Yee to
replace looping by vectorization and addition of the log
argument.
Arash Ardalan wrote [pq]huber()
, and
two arguments for these were implemented by Kai Huang.
This helpfile was adapted from smoothmest.
set.seed(123456) edhuber(1:5, k = 1.5) rhuber(5) ## Not run: mu <- 3; xx <- seq(-2, 7, len = 100) # Plot CDF and PDF plot(xx, dhuber(xx, mu = mu), type = "l", col = "blue", las = 1, main = "blue is density, orange is the CDF", ylab = "", sub = "Purple lines are the 10,20,...,90 percentiles", ylim = 0:1) abline(h = 0, col = "blue", lty = 2) lines(xx, phuber(xx, mu = mu), type = "l", col = "orange") probs <- seq(0.1, 0.9, by = 0.1) Q <- qhuber(probs, mu = mu) lines(Q, dhuber(Q, mu = mu), col = "purple", lty = 3, type = "h") lines(Q, phuber(Q, mu = mu), col = "purple", lty = 3, type = "h") abline(h = probs, col = "purple", lty = 3) phuber(Q, mu = mu) - probs # Should be all 0s ## End(Not run)
Estimates the parameter of the differenced zeta distribution.
diffzeta(start = 1, lshape = "loglink", ishape = NULL)
lshape , ishape
|
Same as |
start |
Smallest value of the support of the distribution. Must be a positive integer. |
The PMF is
$$P(Y=y) = (v/y)^{s} - \{v/(1+y)\}^{s}, \qquad y = v, v+1, \ldots,$$
where $s$ is the positive shape parameter, and $v$
is
start
.
According to Moreno-Sanchez et al. (2016), this model
fits quite well to about 40 percent of all the English books
in the Project Gutenberg data base (about 30,000 texts).
Multiple responses are handled.
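A small sketch check of this PMF against the ddiffzeta density function (the parameter values are arbitrary):

s <- 0.5; v <- 2; y <- v:10
max(abs(ddiffzeta(y, shape = s, start = v) -
        ((v/y)^s - (v/(y + 1))^s)))  # Should be ~0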
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as
vglm
, and vgam
.
T. W. Yee
Moreno-Sanchez, I., Font-Clos, F. and Corral, A. (2016). Large-Scale Analysis of Zipf's Law in English Texts, PLoS ONE, 11(1), 1–19.
Diffzeta
,
zetaff
,
zeta
,
zipf
.
odata <- data.frame(x2 = runif(nn <- 1000)) # Artificial data odata <- transform(odata, shape = loglink(-0.25 + x2, inv = TRUE)) odata <- transform(odata, y1 = rdiffzeta(nn, shape)) with(odata, table(y1)) ofit <- vglm(y1 ~ x2, diffzeta, odata, trace = TRUE) coef(ofit, matrix = TRUE)
Density, distribution function, quantile function, and random generation for the differenced zeta distribution.
ddiffzeta(x, shape, start = 1, log = FALSE) pdiffzeta(q, shape, start = 1, lower.tail = TRUE) qdiffzeta(p, shape, start = 1) rdiffzeta(n, shape, start = 1)
x , q , p , n
|
Same as in |
shape , start
|
Details at |
log , lower.tail
|
Same as in |
This distribution appears to work well on the distribution
of English words in texts such as those in the Project Gutenberg data base.
Some more details are given in diffzeta
.
ddiffzeta
gives the density,
pdiffzeta
gives the distribution function,
qdiffzeta
gives the quantile function, and
rdiffzeta
generates random deviates.
Given some response data, the VGAM family function
diffzeta
estimates the parameter shape
.
Function pdiffzeta()
suffers from the problems that
plog
sometimes has, i.e., when p
is very close to 1.
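A round-trip sketch, away from the problematic region near 1 and assuming the usual discrete quantile convention (the smallest value whose CDF reaches p):

p <- seq(0.1, 0.9, by = 0.2)
q <- qdiffzeta(p, shape = 0.5, start = 2)
all(pdiffzeta(q, shape = 0.5, start = 2) >= p)  # Should be TRUE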
T. W. Yee
diffzeta
,
zetaff
,
zipf
,
Oizeta
.
ddiffzeta(1:20, 0.5, start = 2) rdiffzeta(20, 0.5) ## Not run: shape <- 0.8; x <- 1:10 plot(x, ddiffzeta(x, sh = shape), type = "h", ylim = 0:1, las = 1, sub = "shape=0.8", col = "blue", ylab = "Probability", main = "Differenced zeta distribution: blue=PMF; orange=CDF") lines(x + 0.1, pdiffzeta(x, shape = shape), col = "orange", lty = 3, type = "h") ## End(Not run)
Fits a Dirichlet distribution to a matrix of compositions.
dirichlet(link = "loglink", parallel = FALSE, zero = NULL, imethod = 1)
link |
Link function applied to each of the $M$ (positive) shape parameters $\alpha_j$. |
parallel , zero , imethod
|
See |
In this help file the response is assumed to be an $M$-column
matrix with positive values and whose rows each sum to unity.
Such data can be thought of as compositional data. There are
$M$ linear/additive predictors $\eta_j = g(\alpha_j)$.
The Dirichlet distribution is commonly used to model compositional
data, including applications in genetics.
Suppose $Y = (Y_1, \ldots, Y_M)^T$ is
the response. Then it has a Dirichlet distribution if $Y$
has density
$$\frac{\Gamma(\alpha_{+})}{\prod_{j=1}^{M} \Gamma(\alpha_{j})} \prod_{j=1}^{M} y_{j}^{\alpha_{j} - 1}$$
where $\alpha_{+} = \alpha_{1} + \cdots + \alpha_{M}$,
and the density is defined on the unit simplex
$$\Delta_{M} = \left\{ (y_{1}, \ldots, y_{M})^T : y_{1} > 0, \ldots, y_{M} > 0, \ \textstyle\sum_{j=1}^{M} y_{j} = 1 \right\}.$$
One has $E(Y_{j}) = \alpha_{j} / \alpha_{+}$,
which are returned as the fitted values.
For this distribution Fisher scoring corresponds to Newton-Raphson.
The Dirichlet distribution can be motivated by considering
the random variables $(V_{1}, \ldots, V_{M})^T$, which are
each independently distributed as a gamma distribution with density
$f(v_{j}) = v_{j}^{\alpha_{j}-1} e^{-v_{j}} / \Gamma(\alpha_{j})$.
Then the Dirichlet distribution arises when
$Y_{j} = V_{j} / (V_{1} + \cdots + V_{M})$.
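A brief simulation sketch of this gamma construction and the mean formula, using base R only (the shape values are arbitrary):

set.seed(1)
alpha <- exp(c(-1, 1, 0))                       # Arbitrary shapes
V <- sapply(alpha, function(a) rgamma(1e4, shape = a))
Y <- V / rowSums(V)                             # Dirichlet variates
colMeans(Y)                                     # Approximately:
alpha / sum(alpha)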
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
When fitted, the fitted.values
slot of the object
contains the $M$-column matrix of means.
The response should be a matrix of positive values whose rows
each sum to unity. Similar to this is count data, where probably
a multinomial logit model (multinomial
) may be
appropriate. Another similar distribution to the Dirichlet
is the Dirichlet-multinomial (see dirmultinomial
).
Thomas W. Yee
Lange, K. (2002). Mathematical and Statistical Methods for Genetic Analysis, 2nd ed. New York: Springer-Verlag.
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
rdiric
,
dirmultinomial
,
multinomial
,
simplex
.
ddata <- data.frame(rdiric(1000, shape = exp(c(y1 = -1, y2 = 1, y3 = 0)))) fit <- vglm(cbind(y1, y2, y3) ~ 1, dirichlet, data = ddata, trace = TRUE, crit = "coef") Coef(fit) coef(fit, matrix = TRUE) head(fitted(fit))
Fits a Dirichlet-multinomial distribution to a matrix of non-negative integers.
dirmul.old(link = "loglink", ialpha = 0.01, parallel = FALSE, zero = NULL)
link |
Link function applied to each of the $M$ (positive) shape parameters $\alpha_j$. |
ialpha |
Numeric vector. Initial values for the
|
parallel |
A logical, or formula specifying which terms have equal/unequal coefficients. |
zero |
An integer-valued vector specifying which linear/additive
predictors are modelled as intercepts only. The values must
be from the set {1,2,...,M}. |
The Dirichlet-multinomial distribution, which is somewhat similar to a Dirichlet distribution, has probability function
$$P(Y_1=y_1,\ldots,Y_M=y_M) = \binom{2y_{*}}{y_1,\ldots,y_M} \frac{\Gamma(\alpha_{+})}{\Gamma(2y_{*}+\alpha_{+})} \prod_{j=1}^{M} \frac{\Gamma(y_j+\alpha_j)}{\Gamma(\alpha_j)}$$
for $\alpha_j > 0$,
$\alpha_{+} = \alpha_1 + \cdots + \alpha_M$,
and $2y_{*} = y_1 + \cdots + y_M$.
Here, $\binom{a}{b}$
means “$a$ choose $b$” and
refers to combinations (see
choose
).
The (posterior) mean is
$$E(Y_j) = (y_j + \alpha_j) / (2y_{*} + \alpha_{+})$$
for $j = 1, \ldots, M$, and these are returned
as the fitted values as an
$M$-column matrix.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
The response should be a matrix of non-negative values. Convergence seems to slow down if there are zero values. Currently, initial values can be improved upon.
This function is almost defunct and may be withdrawn soon.
Use dirmultinomial
instead.
Thomas W. Yee
Lange, K. (2002). Mathematical and Statistical Methods for Genetic Analysis, 2nd ed. New York: Springer-Verlag.
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
Paul, S. R., Balasooriya, U. and Banerjee, T. (2005). Fisher information matrix of the Dirichlet-multinomial distribution. Biometrical Journal, 47, 230–236.
Tvedebrink, T. (2010).
Overdispersion in allelic counts and θ-correction
in forensic genetics.
Theoretical Population Biology,
78, 200–210.
dirmultinomial
,
dirichlet
,
betabinomialff
,
multinomial
.
# Data from p.50 of Lange (2002) alleleCounts <- c(2, 84, 59, 41, 53, 131, 2, 0, 0, 50, 137, 78, 54, 51, 0, 0, 0, 80, 128, 26, 55, 95, 0, 0, 0, 16, 40, 8, 68, 14, 7, 1) dim(alleleCounts) <- c(8, 4) alleleCounts <- data.frame(t(alleleCounts)) dimnames(alleleCounts) <- list(c("White","Black","Chicano","Asian"), paste("Allele", 5:12, sep = "")) set.seed(123) # @initialize uses random numbers fit <- vglm(cbind(Allele5,Allele6,Allele7,Allele8,Allele9, Allele10,Allele11,Allele12) ~ 1, dirmul.old, trace = TRUE, crit = "c", data = alleleCounts) (sfit <- summary(fit)) vcov(sfit) round(eta2theta(coef(fit), fit@misc$link, fit@misc$earg), digits = 2) # not preferred round(Coef(fit), digits = 2) # preferred round(t(fitted(fit)), digits = 4) # 2nd row of Lange (2002, Table 3.5) coef(fit, matrix = TRUE) pfit <- vglm(cbind(Allele5,Allele6,Allele7,Allele8,Allele9, Allele10,Allele11,Allele12) ~ 1, dirmul.old(parallel = TRUE), trace = TRUE, data = alleleCounts) round(eta2theta(coef(pfit, matrix = TRUE), pfit@misc$link, pfit@misc$earg), digits = 2) # 'Right' answer round(Coef(pfit), digits = 2) # 'Wrong' due to parallelism constraint
Fits a Dirichlet-multinomial distribution to a matrix response.
dirmultinomial(lphi = "logitlink", iphi = 0.10, parallel = FALSE, zero = "M")
lphi |
Link function applied to the $\phi$ parameter, which lies in the open unit interval $(0,1)$. |
iphi |
Numeric. Initial value for $\phi$. Must be in the open unit interval $(0,1)$. |
parallel |
A logical (formula not allowed here) indicating whether the
probabilities |
zero |
An integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The values must be from the set {1,2,...,M}. |
The Dirichlet-multinomial distribution arises from a multinomial distribution where the probability parameters are not constant but are generated from a multivariate distribution called the Dirichlet distribution. The Dirichlet-multinomial distribution has probability function
$$P(Y_1=y_1,\ldots,Y_M=y_M) = \binom{N_{*}}{y_1,\ldots,y_M} \frac{\prod_{j=1}^{M} \prod_{r=1}^{y_j} \{\pi_j (1-\phi) + (r-1)\phi\}}{\prod_{r=1}^{N_{*}} \{1 - \phi + (r-1)\phi\}}$$
where $\phi$ is the over-dispersion parameter
and $N_{*} = y_1 + \cdots + y_M$. Here,
$\binom{a}{b}$ means “$a$ choose $b$”
and refers to combinations (see
choose
).
The above formula applies to each row of the matrix response.
In this VGAM family function the first $M-1$
linear/additive predictors correspond to the first $M-1$
probabilities via
$$\eta_j = \log(\pi_j / \pi_M), \qquad j = 1, \ldots, M-1,$$
where $\eta_j$ is the $j$th linear/additive
predictor ($\eta_M = 0$ by definition for $\pi_M$
but not for $\phi$).
The $M$th linear/additive predictor corresponds to
lphi
applied to $\phi$.
Note that $E(Y_j) = N_{*} \pi_j$ but
the probabilities (returned as the fitted values)
$\pi_j$ are bundled together as an $M$-column
matrix. The quantities $N_{*}$
are returned as the prior
weights.
The beta-binomial distribution is a special case of
the Dirichlet-multinomial distribution when $M = 2$;
see
betabinomial
. It is easy to show that
the first shape parameter of the beta distribution is
$shape1 = \pi (1/\phi - 1)$
and the second shape parameter is
$shape2 = (1 - \pi)(1/\phi - 1)$. Also,
$\phi = 1/(1 + shape1 + shape2)$, which
is known as the intra-cluster correlation coefficient.
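A small sketch in R of these relations between $\phi$ and the beta shape parameters (the values of phi and the probability below are arbitrary, chosen only for illustration):

phi <- 0.1; prob <- 0.4                # Arbitrary illustrative values
shape1 <- prob * (1/phi - 1)           # First beta shape parameter
shape2 <- (1 - prob) * (1/phi - 1)     # Second beta shape parameter
1 / (1 + shape1 + shape2)              # Recovers phi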
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
, rrvglm
and vgam
.
If the model is an intercept-only model then @misc
(which is a
list) has a component called shape
which is a vector with the $M$
values $\pi_j (1/\phi - 1)$.
This VGAM family function is prone to numerical problems, especially when there are covariates.
The response can be a matrix of non-negative integers, or
else a matrix of sample proportions and the total number of
counts in each row specified using the weights
argument.
This dual input option is similar to multinomial
.
To fit a ‘parallel’ model with the $\phi$
parameter being an intercept-only term you will need to use the
constraints
argument.
Currently, Fisher scoring is implemented. To compute the
expected information matrix a for
loop is used; this
may be very slow when the counts are large. Additionally,
convergence may be slower than usual due to round-off error
when computing the expected information matrices.
Thomas W. Yee
Paul, S. R., Balasooriya, U. and Banerjee, T. (2005). Fisher information matrix of the Dirichlet-multinomial distribution. Biometrical Journal, 47, 230–236.
Tvedebrink, T. (2010).
Overdispersion in allelic counts and θ-correction in
forensic genetics.
Theoretical Population Biology, 78, 200–210.
Yu, P. and Shaw, C. A. (2014). An Efficient Algorithm for Accurate Computation of the Dirichlet-Multinomial Log-Likelihood Function. Bioinformatics, 30, 1547–54.
dirmul.old
,
betabinomial
,
betabinomialff
,
dirichlet
,
multinomial
.
nn <- 5; M <- 4; set.seed(1) ydata <- data.frame(round(matrix(runif(nn * M, max = 100), nn, M))) colnames(ydata) <- paste("y", 1:M, sep = "") # Integer counts fit <- vglm(cbind(y1, y2, y3, y4) ~ 1, dirmultinomial, data = ydata, trace = TRUE) head(fitted(fit)) depvar(fit) # Sample proportions weights(fit, type = "prior", matrix = FALSE) # Total counts per row ## Not run: ydata <- transform(ydata, x2 = runif(nn)) fit <- vglm(cbind(y1, y2, y3, y4) ~ x2, dirmultinomial, data = ydata, trace = TRUE) Coef(fit) coef(fit, matrix = TRUE) (sfit <- summary(fit)) vcov(sfit) ## End(Not run)
Density for the log F distribution.
dlogF(x, shape1, shape2, log = FALSE)
x |
Vector of quantiles. |
shape1 , shape2
|
Positive shape parameters. |
log |
if |
The details are given in logF
.
dlogF
gives the density.
T. W. Yee
## Not run: shape1 <- 1.5; shape2 <- 0.5; x <- seq(-5, 8, length = 1001) plot(x, dlogF(x, shape1, shape2), type = "l", las = 1, col = "blue", ylab = "pdf", main = "log F density function") ## End(Not run)
Maximum likelihood estimation of the two parameters of a univariate normal distribution when there is double censoring.
double.cens.normal(r1 = 0, r2 = 0, lmu = "identitylink", lsd = "loglink", imu = NULL, isd = NULL, zero = "sd")
r1 , r2
|
Integers. Number of smallest and largest values censored, respectively. |
lmu , lsd
|
Parameter link functions applied to the
mean and standard deviation.
See |
imu , isd , zero
|
See |
This family function uses the Fisher information matrix given
in Harter and Moore (1966). The matrix is not diagonal if
either r1
or r2
are positive.
By default, the mean is the first linear/additive predictor and the log of the standard deviation is the second linear/additive predictor.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
, and vgam
.
This family function only handles a vector or one-column matrix
response. The weights
argument, if used, is interpreted
as frequencies; therefore it must be a vector with positive
integer values.
With no censoring at all (the default), it is better (and
equivalent) to use uninormal
.
T. W. Yee
Harter, H. L. and Moore, A. H. (1966). Iterative maximum-likelihood estimation of the parameters of normal populations from singly and doubly censored samples. Biometrika, 53, 205–213.
uninormal
,
cens.normal
,
tobit
.
## Not run: # Repeat the simulations of Harter & Moore (1966) SIMS <- 100 # Number of simulations (change this to 1000) mu.save <- sd.save <- rep(NA, len = SIMS) r1 <- 0; r2 <- 4; nn <- 20 for (sim in 1:SIMS) { y <- sort(rnorm(nn)) y <- y[(1+r1):(nn-r2)] # Delete r1 smallest and r2 largest fit <- vglm(y ~ 1, double.cens.normal(r1 = r1, r2 = r2)) mu.save[sim] <- predict(fit)[1, 1] sd.save[sim] <- exp(predict(fit)[1, 2]) # Assumes a log link & ~ 1 } c(mean(mu.save), mean(sd.save)) # Should be c(0,1) c(sd(mu.save), sd(sd.save)) ## End(Not run) # Data from Sarhan & Greenberg (1962); MLEs are mu=9.2606, sd=1.3754 strontium90 <- data.frame(y = c(8.2, 8.4, 9.1, 9.8, 9.9)) fit <- vglm(y ~ 1, double.cens.normal(r1 = 2, r2 = 3, isd = 6), data = strontium90, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit)
Fits a double exponential binomial distribution by maximum likelihood estimation. The two parameters here are the mean and dispersion parameter.
double.expbinomial(lmean = "logitlink", ldispersion = "logitlink", idispersion = 0.25, zero = "dispersion")
lmean , ldispersion
|
Link functions applied to the two parameters, called
|
idispersion |
Initial value for the dispersion parameter. If given, it must be in range, and is recycled to the necessary length. Use this argument if convergence failure occurs. |
zero |
A vector specifying which
linear/additive predictor is to be modelled as intercept-only.
If assigned, the single value can be either 1 or 2. |
This distribution provides a way for handling overdispersion in
a binary response. The double exponential binomial distribution
belongs to the family of double exponential distributions proposed
by Efron (1986). Below, equation numbers refer to that original
article. Briefly, the idea is that an ordinary one-parameter
exponential family allows the addition of a second parameter
$\theta$ which varies the dispersion of the family
without changing the mean. The extended family behaves like
the original family with sample size changed from $n$
to $n\theta$.
The extended family is an exponential family in $\mu$
when $n$ and $\theta$ are fixed, and an
exponential family in $\theta$ when $n$ and $\mu$
are fixed. Having $0 < \theta < 1$
corresponds to overdispersion with respect to the
binomial distribution. See Efron (1986) for full details.
This VGAM family function implements an
approximation (2.10) to the exact density (2.4). It
replaces the normalizing constant by unity since the
true value nearly equals 1. The default model fitted is
$\eta_1 = logit(\mu)$ and
$\eta_2 = logit(\theta)$. This restricts
both parameters to lie between 0 and 1, although the
dispersion parameter can be modelled over a larger parameter
space by assigning the arguments
ldispersion
and
edispersion
.
Approximately, the mean (of $Y$) is $\mu$.
The effective sample size is the dispersion
parameter multiplied by the original sample size, i.e.,
$n\theta$. This family function uses Fisher
scoring, and the two estimates are asymptotically independent
because the expected information matrix is diagonal.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
.
Numerical difficulties can occur; if so, try using
idispersion
.
This function processes the input in the same way
as binomialff
, however multiple responses are
not allowed (binomialff(multiple.responses = FALSE)
).
T. W. Yee
Efron, B. (1986). Double exponential families and their use in generalized linear regression. Journal of the American Statistical Association, 81, 709–721.
binomialff
,
toxop
,
CommonVGAMffArguments
.
# This example mimics the example in Efron (1986). # The results here differ slightly. # Scale the variables toxop <- transform(toxop, phat = positive / ssize, srainfall = scale(rainfall), # (6.1) sN = scale(ssize)) # (6.2) # A fit similar (should be identical) to Sec.6 of Efron (1986). # But does not use poly(), and M = 1.25 here, as in (5.3) cmlist <- list("(Intercept)" = diag(2), "I(srainfall)" = rbind(1, 0), "I(srainfall^2)" = rbind(1, 0), "I(srainfall^3)" = rbind(1, 0), "I(sN)" = rbind(0, 1), "I(sN^2)" = rbind(0, 1)) fit <- vglm(cbind(phat, 1 - phat) * ssize ~ I(srainfall) + I(srainfall^2) + I(srainfall^3) + I(sN) + I(sN^2), double.expbinomial(ldisp = extlogitlink(min = 0, max = 1.25), idisp = 0.2, zero = NULL), toxop, trace = TRUE, constraints = cmlist) # Now look at the results coef(fit, matrix = TRUE) head(fitted(fit)) summary(fit) vcov(fit) sqrt(diag(vcov(fit))) # Standard errors # Effective sample size (not quite the last column of Table 1) head(predict(fit)) Dispersion <- extlogitlink(predict(fit)[,2], min = 0, max = 1.25, inverse = TRUE) c(round(weights(fit, type = "prior") * Dispersion, digits = 1)) # Ordinary logistic regression (gives same results as (6.5)) ofit <- vglm(cbind(phat, 1 - phat) * ssize ~ I(srainfall) + I(srainfall^2) + I(srainfall^3), binomialff, toxop, trace = TRUE) # Same as fit but it uses poly(), and can be plotted (cf. Fig.1) cmlist2 <- list("(Intercept)" = diag(2), "poly(srainfall, degree = 3)" = rbind(1, 0), "poly(sN, degree = 2)" = rbind(0, 1)) fit2 <- vglm(cbind(phat, 1 - phat) * ssize ~ poly(srainfall, degree = 3) + poly(sN, degree = 2), double.expbinomial(ldisp = extlogitlink(min = 0, max = 1.25), idisp = 0.2, zero = NULL), toxop, trace = TRUE, constraints = cmlist2) ## Not run: par(mfrow = c(1, 2)) # Cf. Fig.1 plot(as(fit2, "vgam"), se = TRUE, lcol = "blue", scol = "orange") # Cf. Figure 1(a) par(mfrow = c(1,2)) ooo <- with(toxop, sort.list(rainfall)) with(toxop, plot(rainfall[ooo], fitted(fit2)[ooo], type = "l", col = "blue", las = 1, ylim = c(0.3, 0.65))) with(toxop, points(rainfall[ooo], fitted(ofit)[ooo], col = "orange", type = "b", pch = 19)) # Cf. Figure 1(b) ooo <- with(toxop, sort.list(ssize)) with(toxop, plot(ssize[ooo], Dispersion[ooo], type = "l", col = "blue", las = 1, xlim = c(0, 100))) ## End(Not run)
Relative frequencies of serum proteins in white Pekin ducklings as determined by electrophoresis.
data(ducklings)
The format is: chr "ducklings"
Columns p1
, p2
, p3
stand for pre-albumin, albumin, globulins respectively.
These were collected from 3-week old white Pekin ducklings.
Let $Y_1$ be proportional to the total milligrams of
pre-albumin in the blood serum of a duckling.
Similarly,
let $Y_2$ and $Y_3$ be directly proportional
to the same factor as $Y_1$ to the total milligrams
respectively of albumin and globulins in its blood serum.
The proportion of pre-albumin is given by
$Y_1 / (Y_1 + Y_2 + Y_3)$,
and similarly for the others.
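Given this construction, the three proportions should sum to unity for each duckling; a quick sketch check (assuming the columns p1, p2 and p3 described above):

with(ducklings, summary(p1 + p2 + p3))  # Should be ~1 for every row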
Mosimann, J. E. (1962).
On the compound multinomial distribution,
the multivariate β-distribution,
and correlations among proportions.
Biometrika,
49, 65–82.
print(ducklings)
Returns the desired quantiles of a quantile regression object, such as an extlogF1() or lms.bcn() VGLM object.
eCDF.vglm(object, all = FALSE, ...)
object |
an object such as
a |
all |
Logical. Return all other information? If true, the empirical CDF is returned. |
... |
additional optional arguments. Currently unused. |
This function was specifically written for
a vglm
object
with family function extlogF1
or lms.bcn
.
It returns the proportion of data lying below each of
the fitted quantiles, and optionally
the desired quantiles (arguments tau
or
percentiles / 100
in the family function).
The output is coerced to be comparable between
family functions by calling the columns by
the same names.
A vector with each value lying in (0, 1).
If all = TRUE
then a 2-column matrix with the
second column being the tau
values or equivalent.
fit1 <- vglm(BMI ~ ns(age, 4), extlogF1, data = bmi.nz) # trace = TRUE eCDF(fit1) eCDF(fit1, all = TRUE)
Enzyme velocity and substrate concentration.
data(enzyme)
A data frame with 12 observations on the following 2 variables.
conc
a numeric explanatory vector; substrate concentration
velocity
a numeric response vector; enzyme velocity
Sorry, more details need to be included later.
Watts, D. G. (1981). An introduction to nonlinear least squares. In: L. Endrenyi (Ed.), Kinetic Data Analysis: Design and Analysis of Enzyme and Pharmacokinetic Experiments, pp.1–24. New York: Plenum Press.
## Not run: fit <- vglm(velocity ~ 1, micmen, data = enzyme, trace = TRUE, form2 = ~ conc - 1, crit = "crit") summary(fit) ## End(Not run)
Computes the error function, or its inverse, based on the normal distribution. Also computes the complement of the error function, or its inverse.
erf(x, inverse = FALSE) erfc(x, inverse = FALSE)
x |
Numeric. |
inverse |
Logical. Of length 1. |
$erf(x)$ is defined as
$$erf(x) = \frac{2}{\sqrt{\pi}} \int_0^x \exp(-t^2) \, dt$$
so that it is closely related to pnorm
.
The inverse function is defined for $x$ in $(-1, 1)$
.
Returns the value of the function evaluated at x
.
Some authors omit the term $2/\sqrt{\pi}$ from the
definition of $erf(x)$
. Although defined for complex
arguments, this function only works for real arguments.
The complementary error function $erfc(x)$ is defined
as $1 - erf(x)$
, and is implemented by
erfc
.
Its inverse function is defined for $x$ in $(0, 2)$
.
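A quick sketch check of the close relationship with pnorm (the grid of values is arbitrary):

x <- seq(-2, 2, by = 0.5)
max(abs(erf(x)  - (2 * pnorm(x * sqrt(2)) - 1)))                # ~0
max(abs(erfc(x) - 2 * pnorm(x * sqrt(2), lower.tail = FALSE)))  # ~0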
T. W. Yee
Abramowitz, M. and Stegun, I. A. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover Publications Inc.
## Not run: curve(erf, -3, 3, col = "orange", ylab = "", las = 1) curve(pnorm, -3, 3, add = TRUE, col = "blue", lty = "dotted", lwd = 2) abline(v = 0, h = 0, lty = "dashed") legend("topleft", c("erf(x)", "pnorm(x)"), col = c("orange", "blue"), lty = c("solid", "dotted"), lwd = 1:2) ## End(Not run)
Estimates the scale parameter of the Erlang distribution by maximum likelihood estimation.
erlang(shape.arg, lscale = "loglink", imethod = 1, zero = NULL)
shape.arg |
The shape parameters.
The user must specify a positive integer,
or integers for multiple responses.
They are recycled |
lscale |
Link function applied to the (positive) scale parameter. |
imethod , zero
|
See |
The Erlang distribution is a special case of
the gamma distribution
with shape that is a positive integer.
If shape.arg = 1
then it simplifies to the exponential distribution.
As illustrated
in the example below, the Erlang distribution is
the distribution of
the sum of shape.arg
independent and
identically distributed
exponential random variates.
The probability density function of the Erlang distribution is given by
$$f(y) = \exp(-y/b) \, y^{k-1} \, b^{-k} / \Gamma(k)$$
for known positive integer $k$,
unknown $b > 0$ and $y > 0$.
Here, $\Gamma(k)$ is the gamma
function, as in
gamma
.
The mean of Y
is $\mu = kb$ and
its variance is $kb^2$.
The linear/additive predictor, by default, is
$\eta = \log(b)$.
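Since the Erlang distribution is the gamma distribution with integer shape, the density above can be checked against dgamma (a sketch; the values of k and b below are arbitrary):

k <- 3; b <- 1 / exp(2)
y <- seq(0.01, 0.5, by = 0.01)
max(abs(dgamma(y, shape = k, scale = b) -
        exp(-y/b) * y^(k - 1) * b^(-k) / gamma(k)))  # Should be ~0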
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
Multiple responses are permitted.
The rate
parameter found in gammaR
is 1/scale
here—see also rgamma
.
T. W. Yee
Most standard texts on statistical distributions describe this distribution, e.g.,
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
gammaR
,
exponential
,
simulate.vlm
.
rate <- exp(2); myshape <- 3
edata <- data.frame(y = rep(0, nn <- 1000))
for (ii in 1:myshape)
  edata <- transform(edata, y = y + rexp(nn, rate = rate))
fit <- vglm(y ~ 1, erlang(shape = myshape), edata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)  # Answer = 1/rate
1/rate
summary(fit)
Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of an exponential distribution.
deexp(x, rate = 1, log = FALSE)
peexp(q, rate = 1, lower.tail = TRUE, log.p = FALSE)
qeexp(p, rate = 1, Maxit.nr = 10, Tol.nr = 1.0e-6,
      lower.tail = TRUE, log.p = FALSE)
reexp(n, rate = 1)
x , p , q
|
See |
n , rate , log
|
See |
lower.tail , log.p
|
|
Maxit.nr , Tol.nr
|
See |
General details are given in deunif, including a note regarding the terminology used. Here, exp corresponds to the distribution of interest, F, and eexp corresponds to G. The addition of "e" is for the 'other' distribution associated with the parent distribution. Thus deexp is for g, peexp is for G, qeexp is for the inverse of G, and reexp generates random variates from G.

For qeexp the Newton-Raphson algorithm is used to solve for y satisfying p = G(y). Numerical problems may occur when values of p are very close to 0 or 1.

deexp(x) gives the density function g(x). peexp(q) gives the distribution function G(q). qeexp(p) gives the expectile function: the value y such that G(y) = p. reexp(n) gives random variates from G.
T. W. Yee and Kai Huang
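A quick sketch of the 0.5-expectile property for Exp(rate = 2), whose mean is 0.5 (assumes VGAM is attached):

qeexp(0.5, rate = 2)  # The 0.5-expectile is the mean, 1/2
peexp(0.5, rate = 2)  # G evaluated at the mean; should be 0.5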
my.p <- 0.25; y <- rexp(nn <- 1000)
(myexp <- qeexp(my.p))
sum(myexp - y[y <= myexp]) / sum(abs(myexp - y))  # Should be my.p

## Not run: 
par(mfrow = c(2, 1))
yy <- seq(0, 4, len = nn)
plot(yy, deexp(yy), col = "blue", ylim = 0:1, xlab = "y", ylab = "g(y)",
     type = "l", main = "g(y) for Exp(1); dotted green is f(y) = dexp(y)")
lines(yy, dexp(yy), col = "green", lty = "dotted", lwd = 2)  # 'original'
plot(yy, peexp(yy), type = "l", col = "blue", ylim = 0:1,
     xlab = "y", ylab = "G(y)", main = "G(y) for Exp(1)")
abline(v = 1, h = 0.5, col = "red", lty = "dashed")
lines(yy, pexp(yy), col = "green", lty = "dotted", lwd = 2)
## End(Not run)
Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of a normal distribution.
denorm(x, mean = 0, sd = 1, log = FALSE)
penorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
qenorm(p, mean = 0, sd = 1, Maxit.nr = 10, Tol.nr = 1.0e-6,
       lower.tail = TRUE, log.p = FALSE)
renorm(n, mean = 0, sd = 1)
x , p , q
|
See |
n , mean , sd , log
|
See |
lower.tail , log.p
|
|
Maxit.nr , Tol.nr
|
See |
General details are given in deunif, including a note regarding the terminology used. Here, norm corresponds to the distribution of interest, F, and enorm corresponds to G. The addition of "e" is for the 'other' distribution associated with the parent distribution. Thus denorm is for g, penorm is for G, qenorm is for the inverse of G, and renorm generates random variates from G.

For qenorm the Newton-Raphson algorithm is used to solve for y satisfying p = G(y). Numerical problems may occur when values of p are very close to 0 or 1.

denorm(x) gives the density function g(x). penorm(q) gives the distribution function G(q). qenorm(p) gives the expectile function: the value y such that G(y) = p. renorm(n) gives random variates from G.
T. W. Yee and Kai Huang
deunif
,
deexp
,
dnorm
,
amlnormal
,
lms.bcn
.
my.p <- 0.25; y <- rnorm(nn <- 1000)
(myexp <- qenorm(my.p))
sum(myexp - y[y <= myexp]) / sum(abs(myexp - y))  # Should be my.p

# Non-standard normal
mymean <- 1; mysd <- 2
yy <- rnorm(nn, mymean, mysd)
(myexp <- qenorm(my.p, mymean, mysd))
sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy))  # Should be my.p
penorm(-Inf, mymean, mysd)      # Should be 0
penorm( Inf, mymean, mysd)      # Should be 1
penorm(mean(yy), mymean, mysd)  # Should be 0.5
abs(qenorm(0.5, mymean, mysd) - mean(yy))  # Should be 0
abs(penorm(myexp, mymean, mysd) - my.p)    # Should be 0
integrate(f = denorm, lower = -Inf, upper = Inf,
          mymean, mysd)  # Should be 1

## Not run: 
par(mfrow = c(2, 1))
yy <- seq(-3, 3, len = nn)
plot(yy, denorm(yy), type = "l", col = "blue", xlab = "y", ylab = "g(y)",
     main = "g(y) for N(0,1); dotted green is f(y) = dnorm(y)")
lines(yy, dnorm(yy), col = "green", lty = "dotted", lwd = 2)  # 'original'
plot(yy, penorm(yy), type = "l", col = "blue", ylim = 0:1,
     xlab = "y", ylab = "G(y)", main = "G(y) for N(0,1)")
abline(v = 0, h = 0.5, col = "red", lty = "dashed")
lines(yy, pnorm(yy), col = "green", lty = "dotted", lwd = 2)
## End(Not run)
Density function, distribution function, and quantile/expectile function and random generation for the scaled Student t distribution with 2 degrees of freedom.
dsc.t2(x, location = 0, scale = 1, log = FALSE)
psc.t2(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
qsc.t2(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
rsc.t2(n, location = 0, scale = 1)
x , q
|
Vector of expectiles/quantiles. See the terminology note below. |
p |
Vector of probabilities.
These should lie in |
n , log
|
See |
location , scale
|
Location and scale parameters. The latter should have positive values. Values of these vectors are recycled. |
lower.tail , log.p
|
A Student-t distribution with 2 degrees of freedom and
a scale parameter of sqrt(2)
is equivalent to
the standard form of this distribution
(called Koenker's distribution below).
Further details about this distribution are given in
sc.studentt2
.
dsc.t2(x)
gives the density function.
psc.t2(q)
gives the distribution function.
qsc.t2(p)
gives the expectile and quantile function.
rsc.t2(n)
gives random variates.
T. W. Yee and Kai Huang
my.p <- 0.25; y <- rsc.t2(nn <- 5000)
(myexp <- qsc.t2(my.p))
sum(myexp - y[y <= myexp]) / sum(abs(myexp - y))  # Should be my.p

# Equivalently:
I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp])
I2 <- mean(y >  myexp) * mean(-myexp + y[y >  myexp])
I1 / (I1 + I2)  # Should be my.p

# Or:
I1 <- sum( myexp - y[y <= myexp])
I2 <- sum(-myexp + y[y >  myexp])

# Non-standard Koenker distribution
myloc <- 1; myscale <- 2
yy <- rsc.t2(nn, myloc, myscale)
(myexp <- qsc.t2(my.p, myloc, myscale))
sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy))  # Should be my.p
psc.t2(mean(yy), myloc, myscale)             # Should be 0.5
abs(qsc.t2(0.5, myloc, myscale) - mean(yy))  # Should be 0
abs(psc.t2(myexp, myloc, myscale) - my.p)    # Should be 0
integrate(f = dsc.t2, lower = -Inf, upper = Inf,
          locat = myloc, scale = myscale)  # Should be 1

y <- seq(-7, 7, len = 201)
max(abs(dsc.t2(y) - dt(y / sqrt(2), df = 2) / sqrt(2)))  # Should be 0

## Not run: 
plot(y, dsc.t2(y), type = "l", col = "blue", las = 1,
     ylim = c(0, 0.4), main = "Blue = Koenker; orange = N(0, 1)")
lines(y, dnorm(y), type = "l", col = "orange")
abline(h = 0, v = 0, lty = 2)
## End(Not run)
Density function, distribution function, and expectile function and random generation for the distribution associated with the expectiles of a uniform distribution.
deunif(x, min = 0, max = 1, log = FALSE)
peunif(q, min = 0, max = 1, lower.tail = TRUE, log.p = FALSE)
qeunif(p, min = 0, max = 1, Maxit.nr = 10, Tol.nr = 1.0e-6,
       lower.tail = TRUE, log.p = FALSE)
reunif(n, min = 0, max = 1)
x , q
|
Vector of expectiles. See the terminology note below. |
p |
Vector of probabilities.
These should lie in |
n , min , max , log
|
See |
lower.tail , log.p
|
|
Maxit.nr |
Numeric.
Maximum number of Newton-Raphson iterations allowed.
A warning is issued if convergence is not obtained for all |
Tol.nr |
Numeric. Small positive value specifying the tolerance or precision to which the expectiles are computed. |
Jones (1994) elucidated on the property that the expectiles of a random variable Y with distribution function F(y) correspond to the quantiles of another distribution G(y), where G is related to F by an explicit formula. In particular, let y be the p-expectile of F. Then y is the p-quantile of G, where

p = G(y) = (P(y) - y F(y)) / (2 [P(y) - y F(y)] + y - mu),

and mu is the mean of Y. The derivative of G is

g(y) = (mu F(y) - P(y)) / (2 [P(y) - y F(y)] + y - mu)^2.

Here, P(y) is the partial moment integral from -infinity to y of u f(u) du, and mu = P(infinity). The 0.5-expectile is the mean mu and the 0.5-quantile is the median.
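For the standard uniform these quantities are available in closed form: F(y) = y, P(y) = y^2/2 and mu = 1/2, so the formula above reduces to G(y) = y^2 / (2y^2 - 2y + 1) on (0, 1). A minimal sketch checking this against peunif (assumes VGAM is attached):

yy <- seq(0.01, 0.99, by = 0.01)
G.closed <- yy^2 / (2 * yy^2 - 2 * yy + 1)  # G(y) for Uniform(0, 1)
max(abs(G.closed - peunif(yy)))  # Should be near 0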
A note about the terminology used here. Recall that in the S language there are the dpqr-type functions associated with a distribution, e.g., dunif, punif, qunif, runif, for the uniform distribution. Here, unif corresponds to F and eunif corresponds to G. The addition of "e" (for expectile) is for the 'other' distribution associated with the parent distribution. Thus deunif is for g, peunif is for G, qeunif is for the inverse of G, and reunif generates random variates from G.
For qeunif the Newton-Raphson algorithm is used to solve for y satisfying p = G(y). Numerical problems may occur when values of p are very close to 0 or 1.

deunif(x) gives the density function g(x). peunif(q) gives the distribution function G(q). qeunif(p) gives the expectile function: the expectile y such that G(y) = p. reunif(n) gives random variates from G.
T. W. Yee and Kai Huang
Jones, M. C. (1994). Expectiles and M-quantiles are quantiles. Statistics and Probability Letters, 20, 149–153.
my.p <- 0.25; y <- runif(nn <- 1000)
(myexp <- qeunif(my.p))
sum(myexp - y[y <= myexp]) / sum(abs(myexp - y))  # Should be my.p

# Equivalently:
I1 <- mean(y <= myexp) * mean( myexp - y[y <= myexp])
I2 <- mean(y >  myexp) * mean(-myexp + y[y >  myexp])
I1 / (I1 + I2)  # Should be my.p

# Or:
I1 <- sum( myexp - y[y <= myexp])
I2 <- sum(-myexp + y[y >  myexp])

# Non-standard uniform
mymin <- 1; mymax <- 8
yy <- runif(nn, mymin, mymax)
(myexp <- qeunif(my.p, mymin, mymax))
sum(myexp - yy[yy <= myexp]) / sum(abs(myexp - yy))  # Should be my.p
peunif(mymin, mymin, mymax)     # Should be 0
peunif(mymax, mymin, mymax)     # Should be 1
peunif(mean(yy), mymin, mymax)  # Should be 0.5
abs(qeunif(0.5, mymin, mymax) - mean(yy))           # Should be 0
abs(qeunif(0.5, mymin, mymax) - (mymin + mymax)/2)  # Should be 0
abs(peunif(myexp, mymin, mymax) - my.p)             # Should be 0
integrate(f = deunif, lower = mymin - 3, upper = mymax + 3,
          min = mymin, max = mymax)  # Should be 1

## Not run: 
par(mfrow = c(2, 1))
yy <- seq(0.0, 1.0, len = nn)
plot(yy, deunif(yy), type = "l", col = "blue", ylim = c(0, 2),
     xlab = "y", ylab = "g(y)", main = "g(y) for Uniform(0,1)")
lines(yy, dunif(yy), col = "green", lty = "dotted", lwd = 2)  # 'original'
plot(yy, peunif(yy), type = "l", col = "blue", ylim = 0:1,
     xlab = "y", ylab = "G(y)", main = "G(y) for Uniform(0,1)")
abline(a = 0.0, b = 1.0, col = "green", lty = "dotted", lwd = 2)
abline(v = 0.5, h = 0.5, col = "red", lty = "dashed")
## End(Not run)
Estimates the two parameters of the exponentiated exponential distribution by maximum likelihood estimation.
expexpff(lrate = "loglink", lshape = "loglink", irate = NULL,
         ishape = 1.1, tolerance = 1.0e-6, zero = NULL)
lshape , lrate
|
Parameter link functions for the
|
ishape |
Initial value for the |
irate |
Initial value for the |
tolerance |
Numeric. Small positive value for testing whether values are close enough to 1 and 2. |
zero |
An integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The default is none of them.
If used, choose one value from the set {1,2}.
See |
The exponentiated exponential distribution is an alternative to the Weibull and the gamma distributions. The formula for the density is

f(y; rate, shape) = shape * rate * (1 - exp(-rate * y))^(shape - 1) * exp(-rate * y),

where y > 0, rate > 0 and shape > 0. The mean of Y is (digamma(shape + 1) - digamma(1)) / rate (returned as the fitted values), where digamma is the digamma function. The variance of Y is (trigamma(1) - trigamma(shape + 1)) / rate^2, where trigamma is the trigamma function.

This distribution has been called the two-parameter generalized exponential distribution by Gupta and Kundu (2006). A special case of the exponentiated exponential distribution: shape = 1 is the exponential distribution.
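Since the distribution function is F(y) = (1 - exp(-rate * y))^shape, random variates can be generated by inversion; a minimal base-R sketch checking the mean formula above (the parameter values are hypothetical):

set.seed(1)
rate <- 2; shape <- 4; nn <- 1e5
u <- runif(nn)
y <- -log(1 - u^(1/shape)) / rate  # Inverse-CDF sampling
c(mean(y), (digamma(shape + 1) - digamma(1)) / rate)  # Should agree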
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
Practical experience shows that reasonably good initial values really help. In particular, try setting different values for the ishape argument if numerical problems are encountered or failure to converge occurs. Even if convergence occurs, try perturbing the initial value to make sure the global solution is obtained and not a local solution. The algorithm may fail if the estimate of the shape parameter is too close to unity.
Fisher scoring is used, however, convergence is usually very slow. This is a good sign that there is a bug, but I have yet to check that the expected information is correct. Also, I have yet to implement Type-I right censored data using the results of Gupta and Kundu (2006).
Another algorithm for fitting this model is implemented in
expexpff1
.
T. W. Yee
Gupta, R. D. and Kundu, D. (2001). Exponentiated exponential family: an alternative to gamma and Weibull distributions, Biometrical Journal, 43, 117–130.
Gupta, R. D. and Kundu, D. (2006). On the comparison of Fisher information of the Weibull and GE distributions, Journal of Statistical Planning and Inference, 136, 3130–3144.
expexpff1
,
gammaR
,
weibullR
,
CommonVGAMffArguments
.
## Not run: 
# A special case: exponential data
edata <- data.frame(y = rexp(n <- 1000))
fit <- vglm(y ~ 1, fam = expexpff, data = edata, trace = TRUE, maxit = 99)
coef(fit, matrix = TRUE)
Coef(fit)

# Ball bearings data (number of million revolutions before failure)
edata <- data.frame(bbearings = c(17.88, 28.92, 33.00, 41.52, 42.12,
  45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64,
  68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40))
fit <- vglm(bbearings ~ 1, fam = expexpff(irate = 0.05, ish = 5),
            trace = TRUE, maxit = 300, data = edata)
coef(fit, matrix = TRUE)
Coef(fit)    # Authors get c(rate=0.0314, shape=5.2589)
logLik(fit)  # Authors get -112.9763

# Failure times of the airconditioning system of an airplane
eedata <- data.frame(acplane = c(23, 261, 87, 7, 120, 14, 62, 47,
  225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14,
  11, 16, 90, 1, 16, 52, 95))
fit <- vglm(acplane ~ 1, fam = expexpff(ishape = 0.8, irate = 0.15),
            trace = TRUE, maxit = 99, data = eedata)
coef(fit, matrix = TRUE)
Coef(fit)    # Authors get c(rate=0.0145, shape=0.8130)
logLik(fit)  # Authors get log-lik -152.264
## End(Not run)
Estimates the two parameters of the exponentiated exponential distribution by maximizing a profile (concentrated) likelihood.
expexpff1(lrate = "loglink", irate = NULL, ishape = 1)
lrate |
Parameter link function for the (positive) |
irate |
Initial value for the |
ishape |
Initial value for the |
See expexpff for details about the exponentiated exponential distribution. This family function uses a different algorithm for fitting the model. Given the rate parameter, the MLE of the shape parameter can easily be solved in terms of the rate parameter. This family function maximizes a profile (concentrated) likelihood with respect to the rate parameter. Newton-Raphson is used, which compares with Fisher scoring with expexpff.
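A minimal base-R sketch of the profiling idea under the parametrization in expexpff (the closed-form shape MLE given the rate follows from setting the score for the shape to zero; the data and grid are hypothetical):

set.seed(1)
y <- -log(1 - runif(500)^(1/4)) / 2  # Data with rate = 2, shape = 4
prof.ll <- function(lam) {           # Profile log-likelihood in the rate
  shp <- -length(y) / sum(log1p(-exp(-lam * y)))  # Shape MLE given rate
  sum(log(shp) + log(lam) + (shp - 1) * log1p(-exp(-lam * y)) - lam * y)
}
lams <- seq(0.5, 5, by = 0.01)
lams[which.max(sapply(lams, prof.ll))]  # Approximate MLE of the rate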
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
The standard errors produced by a
summary
of the model may be wrong.
This family function works only for intercept-only models,
i.e., y ~ 1
where y
is the response.
The estimate of the shape parameter is attached to the misc slot of the object, which is a list and contains the component shape.
As Newton-Raphson is used, the working weights are sometimes negative, and some adjustment is made to these to make them positive.
Like expexpff
, good initial
values are needed. Convergence may be slow.
T. W. Yee
Gupta, R. D. and Kundu, D. (2001). Exponentiated exponential family: an alternative to gamma and Weibull distributions, Biometrical Journal, 43, 117–130.
expexpff
,
CommonVGAMffArguments
.
# Ball bearings data (number of million revolutions before failure)
edata <- data.frame(bbearings = c(17.88, 28.92, 33.00, 41.52, 42.12,
  45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.64, 68.64,
  68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40))
fit <- vglm(bbearings ~ 1, expexpff1(ishape = 4), trace = TRUE,
            maxit = 250, checkwz = FALSE, data = edata)
coef(fit, matrix = TRUE)
Coef(fit)  # Authors get c(0.0314, 5.2589) with log-lik -112.9763
logLik(fit)
fit@misc$shape  # Estimate of shape

# Failure times of the airconditioning system of an airplane
eedata <- data.frame(acplane = c(23, 261, 87, 7, 120, 14, 62, 47,
  225, 71, 246, 21, 42, 20, 5, 12, 120, 11, 3, 14, 71, 11, 14,
  11, 16, 90, 1, 16, 52, 95))
fit <- vglm(acplane ~ 1, expexpff1(ishape = 0.8), trace = TRUE,
            maxit = 50, checkwz = FALSE, data = eedata)
coef(fit, matrix = TRUE)
Coef(fit)  # Authors get c(0.0145, 0.8130) with log-lik -152.264
logLik(fit)
fit@misc$shape  # Estimate of shape
Density, distribution function, quantile function and random generation for the exponential geometric distribution.
dexpgeom(x, scale = 1, shape, log = FALSE)
pexpgeom(q, scale = 1, shape)
qexpgeom(p, scale = 1, shape)
rexpgeom(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
scale , shape
|
positive scale and shape parameters. |
log |
Logical.
If |
See expgeometric
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other details.
dexpgeom
gives the density,
pexpgeom
gives the distribution function,
qexpgeom
gives the quantile function, and
rexpgeom
generates random deviates.
We define scale
as the reciprocal of the scale parameter
used by Adamidis and Loukas (1998).
J. G. Lauder and T. W. Yee
expgeometric
,
exponential
,
geometric
.
## Not run: 
shape <- 0.5; scale <- 1; nn <- 501
x <- seq(-0.10, 3.0, len = nn)
plot(x, dexpgeom(x, scale, shape), type = "l", las = 1, ylim = c(0, 2),
     ylab = paste("[dp]expgeom(shape = ", shape, ", scale = ", scale, ")"),
     col = "blue", cex.main = 0.8,
     main = "Blue is density, red is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pexpgeom(x, scale, shape), col = "red")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qexpgeom(probs, scale, shape)
lines(Q, dexpgeom(Q, scale, shape), col = "purple", lty = 3, type = "h")
lines(Q, pexpgeom(Q, scale, shape), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pexpgeom(Q, scale, shape) - probs))  # Should be 0
## End(Not run)
Estimates the two parameters of the exponential geometric distribution by maximum likelihood estimation.
expgeometric(lscale = "loglink", lshape = "logitlink",
             iscale = NULL, ishape = NULL, tol12 = 1e-05,
             zero = 1, nsimEIM = 400)
lscale , lshape
|
Link function for the two parameters.
See |
iscale , ishape
|
Numeric. Optional initial values for the scale and shape parameters. |
tol12 |
Numeric. Tolerance for testing whether a parameter has value 1 or 2. |
zero , nsimEIM
|
The exponential geometric distribution has density function

f(y; scale, shape) = (1/scale) * (1 - shape) * exp(-y/scale) / (1 - shape * exp(-y/scale))^2,

where y > 0, 0 < shape < 1 and scale > 0. The mean, scale * ((shape - 1) / shape) * log(1 - shape), is returned as the fitted values. Note the median is scale * log(2 - shape).
Simulated Fisher scoring is implemented.
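A quick numerical check of the median formula above (assumes VGAM is attached; the parameter values are hypothetical):

shape <- 0.3; scale <- 2
qexpgeom(0.5, scale, shape)  # Median via the quantile function
scale * log(2 - shape)       # Closed form; should agree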
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
We define scale
as the reciprocal of the scale parameter
used by Adamidis and Loukas (1998).
J. G. Lauder and T. W. Yee
Adamidis, K., Loukas, S. (1998). A lifetime distribution with decreasing failure rate. Statistics and Probability Letters, 39, 35–42.
dexpgeom
,
exponential
,
geometric
.
## Not run: 
Scale <- exp(2); shape <- logitlink(-1, inverse = TRUE)
edata <- data.frame(y = rexpgeom(n = 2000, scale = Scale, shape = shape))
fit <- vglm(y ~ 1, expgeometric, edata, trace = TRUE)
c(with(edata, mean(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Computes the exponential integral Ei(x) for real values, as well as exp(-x) * Ei(x) and E1(x), and their derivatives (up to the 3rd derivative).
expint(x, deriv = 0)
expexpint(x, deriv = 0)
expint.E1(x, deriv = 0)
x |
Numeric. Ideally a vector of positive reals. |
deriv |
Integer. Either 0, 1, 2 or 3. |
The exponential integral Ei(x) is the integral of exp(t)/t from 0 to x, for positive real x. The function E1(x) is the integral of exp(-t)/t from x to infinity, for positive real x.

Function expint(x, deriv = n) returns the nth derivative of Ei(x) (up to the 3rd), function expexpint(x, deriv = n) returns the nth derivative of exp(-x) * Ei(x) (up to the 3rd), and function expint.E1(x, deriv = n) returns the nth derivative of E1(x) (up to the 3rd).
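A minimal numerical check of E1 against direct quadrature (assumes VGAM is attached):

expint.E1(2)  # E1(2)
integrate(function(t) exp(-t) / t, 2, Inf)$value  # Should agree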
These functions have not been tested thoroughly.
T. W. Yee has simply written a small wrapper function to call the NETLIB FORTRAN code. Xiangjie Xue modified the functions to calculate derivatives. Higher derivatives can actually be calculated—please let me know if you need it.
https://netlib.org/specfun/ei.
log
,
exp
.
There is also a package called expint.
## Not run: 
par(mfrow = c(2, 2))
curve(expint, 0.01, 2, xlim = c(0, 2), ylim = c(-3, 5),
      las = 1, col = "orange")
abline(v = (-3):5, h = (-4):5, lwd = 2, lty = "dotted", col = "gray")
abline(h = 0, v = 0, lty = "dashed", col = "blue")
curve(expexpint, 0.01, 2, xlim = c(0, 2), ylim = c(-3, 2),
      las = 1, col = "orange")
abline(v = (-3):2, h = (-4):5, lwd = 2, lty = "dotted", col = "gray")
abline(h = 0, v = 0, lty = "dashed", col = "blue")
curve(expint.E1, 0.01, 2, xlim = c(0, 2), ylim = c(0, 5),
      las = 1, col = "orange")
abline(v = (-3):2, h = (-4):5, lwd = 2, lty = "dotted", col = "gray")
abline(h = 0, v = 0, lty = "dashed", col = "blue")
## End(Not run)
Computes the exponential transformation, including its inverse and the first two derivatives.
explink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
        short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue |
See |
inverse , deriv , short , tag
|
Details at |
The exponential link function is potentially
suitable for parameters that
are positive.
Numerical values of theta
close to negative
or positive infinity
may result in
0
, Inf
, -Inf
, NA
or NaN
.
For explink with deriv = 0, the exponential of theta, i.e., exp(theta), is returned when inverse = FALSE; if inverse = TRUE then log(theta) is returned, and if theta is not positive the result is NaN. For deriv = 1 the function returns d eta / d theta as a function of theta if inverse = FALSE; if inverse = TRUE then it returns the reciprocal. Here, all logarithms are natural logarithms, i.e., to base e.
This function has particular use for
computing quasi-variances when
used with rcim
and uninormal
.
Numerical instability may occur when theta
is
close to negative or positive infinity.
One way of overcoming this (one day) is to use bvalue
.
Thomas W. Yee
Links
,
loglink
,
rcim
,
Qvar
,
uninormal
.
theta <- rnorm(30)
explink(theta)
max(abs(explink(explink(theta), inverse = TRUE) - theta))  # 0?
Density, distribution function, quantile function and random generation for the exponential logarithmic distribution.
dexplog(x, scale = 1, shape, log = FALSE)
pexplog(q, scale = 1, shape)
qexplog(p, scale = 1, shape)
rexplog(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
scale , shape
|
positive scale and shape parameters. |
log |
Logical.
If |
See explogff
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other details.
dexplog
gives the density,
pexplog
gives the distribution function,
qexplog
gives the quantile function, and
rexplog
generates random deviates.
We define scale
as the reciprocal of the scale parameter
used by Tahmasabi and Rezaei (2008).
J. G. Lauder and T. W. Yee
## Not run: 
shape <- 0.5; scale <- 2; nn <- 501
x <- seq(-0.50, 6.0, len = nn)
plot(x, dexplog(x, scale, shape), type = "l", las = 1, ylim = c(0, 1.1),
     ylab = paste("[dp]explog(shape = ", shape, ", scale = ", scale, ")"),
     col = "blue", cex.main = 0.8,
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pexplog(x, scale, shape), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qexplog(probs, scale, shape = shape)
lines(Q, dexplog(Q, scale, shape = shape),
      col = "purple", lty = 3, type = "h")
lines(Q, pexplog(Q, scale, shape = shape),
      col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pexplog(Q, scale, shape = shape) - probs))  # Should be 0
## End(Not run)
Estimates the two parameters of the exponential logarithmic distribution by maximum likelihood estimation.
explogff(lscale = "loglink", lshape = "logitlink",
         iscale = NULL, ishape = NULL, tol12 = 1e-05,
         zero = 1, nsimEIM = 400)
lscale , lshape
|
See |
tol12 |
Numeric. Tolerance for testing whether a parameter has value 1 or 2. |
iscale , ishape , zero , nsimEIM
|
The exponential logarithmic distribution has density function

f(y; scale, shape) = (1 / (-log(shape))) * ((1/scale) * (1 - shape) * exp(-y/scale)) / (1 - (1 - shape) * exp(-y/scale)),

where y > 0, the scale parameter is positive, and the shape parameter lies in (0, 1). The mean, -polylog(2, 1 - shape) * scale / log(shape), is not returned as the fitted values. Note the median is scale * log(1 + sqrt(shape)), and it is currently returned as the fitted values.
Simulated Fisher scoring is implemented.
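A quick numerical check of the median formula above (assumes VGAM is attached; the parameter values are hypothetical):

shape <- 0.4; scale <- 2
qexplog(0.5, scale, shape)    # Median via the quantile function
scale * log(1 + sqrt(shape))  # Closed form; should agree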
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
We define scale as the reciprocal of the rate parameter used by Tahmasabi and Rezaei (2008).
Yet to do: find a polylog()
function.
J. G. Lauder and T. W. Yee
Tahmasabi, R. and Rezaei, S. (2008). A two-parameter lifetime distribution with decreasing failure rate. Computational Statistics and Data Analysis, 52, 3889–3901.
## Not run: 
Scale <- exp(2); shape <- logitlink(-1, inverse = TRUE)
edata <- data.frame(y = rexplog(n = 2000, scale = Scale, shape = shape))
fit <- vglm(y ~ 1, explogff, data = edata, trace = TRUE)
c(with(edata, median(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Maximum likelihood estimation for the exponential distribution.
exponential(link = "loglink", location = 0, expected = TRUE, type.fitted = c("mean", "percentiles", "Qlink"), percentiles = 50, ishrinkage = 0.95, parallel = FALSE, zero = NULL)
exponential(link = "loglink", location = 0, expected = TRUE, type.fitted = c("mean", "percentiles", "Qlink"), percentiles = 50, ishrinkage = 0.95, parallel = FALSE, zero = NULL)
link |
Parameter link function applied to the positive parameter |
location |
Numeric of length 1, the known location parameter, |
expected |
Logical. If |
ishrinkage , parallel , zero
|
See |
type.fitted , percentiles
|
See |
The family function assumes the response Y has density

f(y) = lambda * exp(-lambda * (y - A))

for y > A, where A is the known location parameter. By default, A = 0. Then E(Y) = A + 1/lambda and Var(Y) = 1/lambda^2.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
Suppose A = 0. For a fixed time interval, the number of events is Poisson with mean lambda if the time between events has an exponential distribution with mean 1/lambda. The argument rate in exponential is the same as rexp etc. The argument lambda in rpois is somewhat the same as rate here.
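A small base-R simulation sketch of this connection: counting events in a unit time interval whose inter-event gaps are exponential with rate lambda gives Poisson(lambda) counts.

set.seed(1)
lambda <- 3
counts <- replicate(1e4, sum(cumsum(rexp(50, rate = lambda)) <= 1))
c(mean(counts), lambda)  # Sample mean of the counts vs Poisson mean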
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
amlexponential
,
gpd
,
laplace
,
expgeometric
,
explogff
,
poissonff
,
mix2exp
,
freund61
,
simulate.vlm
,
Exponential
.
edata <- data.frame(x2 = runif(nn <- 100) - 0.5)
edata <- transform(edata, x3 = runif(nn) - 0.5)
edata <- transform(edata, eta = 0.2 - 0.7 * x2 + 1.9 * x3)
edata <- transform(edata, rate = exp(eta))
edata <- transform(edata, y = rexp(nn, rate = rate))
with(edata, stem(y))
fit.slow <- vglm(y ~ x2 + x3, exponential, data = edata, trace = TRUE)
fit.fast <- vglm(y ~ x2 + x3, exponential(exp = FALSE), data = edata,
                 trace = TRUE, crit = "coef")
coef(fit.slow, mat = TRUE)
summary(fit.slow)

# Compare results with a GPD. Has a threshold.
threshold <- 0.5
gdata <- data.frame(y1 = threshold + rexp(n = 3000, rate = exp(1.5)))
fit.exp <- vglm(y1 ~ 1, exponential(location = threshold), data = gdata)
coef(fit.exp, matrix = TRUE)
Coef(fit.exp)
logLik(fit.exp)
fit.gpd <- vglm(y1 ~ 1, gpd(threshold = threshold), data = gdata)
coef(fit.gpd, matrix = TRUE)
Coef(fit.gpd)
logLik(fit.gpd)
Density, distribution function, quantile function and random generation for the exponential poisson distribution.
dexppois(x, rate = 1, shape, log = FALSE)
pexppois(q, rate = 1, shape, lower.tail = TRUE, log.p = FALSE)
qexppois(p, rate = 1, shape, lower.tail = TRUE, log.p = FALSE)
rexppois(n, rate = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
shape , rate
|
positive parameters. |
log |
Logical.
If |
lower.tail , log.p
|
See exppoisson
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other details.
dexppois
gives the density,
pexppois
gives the distribution function,
qexppois
gives the quantile function, and
rexppois
generates random deviates.
Kai Huang and J. G. Lauder
## Not run: 
rate <- 2; shape <- 0.5; nn <- 201
x <- seq(-0.05, 1.05, len = nn)
plot(x, dexppois(x, rate = rate, shape), type = "l", las = 1,
     ylim = c(0, 3),
     ylab = paste("fexppoisson(rate = ", rate, ", shape = ", shape, ")"),
     col = "blue", cex.main = 0.8,
     main = "Blue is the density, orange the cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pexppois(x, rate = rate, shape), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qexppois(probs, rate = rate, shape)
lines(Q, dexppois(Q, rate = rate, shape),
      col = "purple", lty = 3, type = "h")
lines(Q, pexppois(Q, rate = rate, shape),
      col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
abline(h = 0, col = "gray50")
max(abs(pexppois(Q, rate = rate, shape) - probs))  # Should be 0
## End(Not run)
Estimates the two parameters of the exponential Poisson distribution by maximum likelihood estimation.
exppoisson(lrate = "loglink", lshape = "loglink", irate = 2,
           ishape = 1.1, zero = NULL)
lshape , lrate
|
Link function for the two positive parameters.
See |
ishape , irate
|
Numeric.
Initial values for the |
zero |
The exponential Poisson distribution has density function

f(y; rate, shape) = (shape * rate / (1 - exp(-shape))) * exp(-shape - rate * y + shape * exp(-rate * y)),

where y > 0 and both the shape and rate parameters are positive. The distribution implies a population facing discrete hazard rates which are multiples of a base hazard. This VGAM family function requires the hypergeo package (to use their genhypergeo function). The median is returned as the fitted value.
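A minimal sketch checking that the density above integrates to one and agrees with dexppois (assumes VGAM is attached and that the parametrization follows the formula above; the parameter values are hypothetical):

shape <- 1.5; rate <- 2
f <- function(y)
  (shape * rate / (1 - exp(-shape))) *
    exp(-shape - rate * y + shape * exp(-rate * y))
integrate(f, 0, Inf)$value  # Should be 1
max(abs(f(1:5) - dexppois(1:5, rate = rate, shape = shape)))  # Near 0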
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
This VGAM family function does not work properly!
J. G. Lauder, [email protected]
Kus, C., (2007). A new lifetime distribution. Computational Statistics and Data Analysis, 51, 4497–4509.
dexppois
,
exponential
,
poisson
.
## Not run: 
shape <- exp(1); rate <- exp(2)
rdata <- data.frame(y = rexppois(n = 1000, rate = rate, shape = shape))
library("hypergeo")  # Required!
fit <- vglm(y ~ 1, exppoisson, data = rdata, trace = FALSE, maxit = 1200)
c(with(rdata, median(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Density, distribution function, quantile function and random generation for the extended beta-binomial distribution.
dextbetabinom(x, size, prob, rho = 0, log = FALSE, forbycol = TRUE)
pextbetabinom(q, size, prob, rho = 0,
              lower.tail = TRUE, forbycol = TRUE)
qextbetabinom(p, size, prob, rho = 0, forbycol = TRUE)
rextbetabinom(n, size, prob, rho = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
size |
number of trials. |
n |
number of observations.
Same as |
prob |
the probability of success |
rho |
the correlation parameter |
log , lower.tail
|
Same meaning as |
forbycol |
Logical.
A |
The extended beta-binomial
distribution allows for a slightly negative
correlation parameter between binary
responses within a cluster (e.g., a litter).
An exchangeable error structure with
correlation is assumed.
dextbetabinom
gives the density,
pextbetabinom
gives the
distribution function,
qextbetabinom
gives the quantile function
and
rextbetabinom
generates random
deviates.
Setting rho = 1
is not recommended
as NaN
is returned,
however the code may be
modified in the future to handle this
special case.
Currently most of the code is quite slow.
Speed improvements are a future project.
Use forbycol
optimally.
extbetabinomial
,
Betabinom
,
Binomial
.
set.seed(1); rextbetabinom(10, 100, 0.5)
set.seed(1); rbinom(10, 100, 0.5)  # Same

## Not run: 
N <- 9; xx <- 0:N; prob <- 0.5; rho <- -0.02
dy <- dextbetabinom(xx, N, prob, rho)
barplot(rbind(dy, dbinom(xx, size = N, prob)), beside = TRUE,
        col = c("blue", "green"), las = 1,
        main = paste0("Beta-binom(size=", N, ", prob=", prob, ", rho=",
                      rho, ") (blue) vs\n",
                      " Binom(size=", N, ", prob=", prob, ") (green)"),
        names.arg = as.character(xx), cex.main = 0.8)
sum(dy * xx)  # Check expected values are equal
sum(dbinom(xx, size = N, prob = prob) * xx)
cumsum(dy) - pextbetabinom(xx, N, prob, rho)  # 0?
## End(Not run)
Fits an extended beta-binomial distribution by maximum likelihood estimation. The two parameters here are the mean and correlation coefficient.
extbetabinomial(lmu = "logitlink", lrho = "cloglink",
    zero = "rho", irho = 0, grho = c(0, 0.05, 0.1, 0.2),
    vfl = FALSE, Form2 = NULL, imethod = 1, ishrinkage = 0.95)
lmu , lrho
|
Link functions applied to the two parameters.
See |
irho , grho
|
The first is similar to |
imethod |
Similar to |
zero |
Similar to |
ishrinkage |
See |
vfl , Form2
|
See |
The extended beta-binomial distribution (EBBD) proposed by Prentice (1986) allows for a slightly negative correlation parameter, whereas the ordinary BBD betabinomial only allows values in (0, 1), so it handles overdispersion only. When the correlation parameter is negative, the data is underdispersed relative to an ordinary binomial distribution.
Argument rho is used here for the correlation parameter of Prentice (1986) because it is the correlation between the (almost) Bernoulli trials. (They are actually simple binary variates.) We use N here for the number of trials (e.g., litter size), T is the number of successes, and p (or mu) is the probability of a success (e.g., a malformation). That is, Y = T/N is the proportion of successes. Like binomialff, the fitted values are the estimated probability of success mu (i.e., mu and not Y), and the prior weights N are attached separately on the object in a slot.

The probability function is difficult to write but it involves three series of products. Recall Y = T/N is the real response being modelled, where T is the (total) sum of N correlated (almost) Bernoulli trials.
The default model is eta1 = logit(mu) and eta2 = log(1 - rho), because the first parameter lies between 0 and 1. The second link is cloglink. The mean of Y is p = mu and the variance of Y is mu * (1 - mu) * (1 + (N - 1) * rho) / N. Here, the correlation rho may be slightly negative and is the correlation between the N individuals within a litter. A litter effect is typically reflected by a positive value of rho and corresponds to overdispersion with respect to the binomial distribution. Thus an exchangeable error structure is assumed between units within a litter for the EBBD.
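A small simulation sketch of the equivalent variance formula for the count T, namely Var(T) = N * mu * (1 - mu) * (1 + (N - 1) * rho) (assumes VGAM is attached; the parameter values are hypothetical):

set.seed(1)
N <- 10; mu <- 0.4; rho <- 0.1
tt <- rextbetabinom(20000, N, mu, rho = rho)
c(var(tt), N * mu * (1 - mu) * (1 + (N - 1) * rho))  # Should agree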
This family function uses Fisher scoring. Elements of the second-order expected derivatives are computed numerically, which may fail for models very near the boundary of the parameter space. Usually, the computations are expensive for large N because of a for loop, so it may take a long time.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
.
Suppose fit is a fitted EBB model. Then depvar(fit) are the sample proportions y, fitted(fit) returns estimates of E(Y) = mu, and weights(fit, type = "prior") returns the number of trials N.
Modelling rho
using covariates well
requires much data
so it is usually best to leave zero
alone.
It is good to set trace = TRUE
and
play around with irho
if there are
problems achieving convergence.
Convergence problems will occur when the
estimated rho
is close to the
lower bound,
i.e., the underdispersion
is almost too severe for the EBB to cope.
This function is recommended over
betabinomial
and
betabinomialff
.
It processes the input in the
same way as binomialff
.
But it does not handle the case N = 1 very well, because there are two parameters to estimate, not one, for each row of the input. Cases where N > 1 can be selected via the subset argument of vglm.
T. W. Yee
Prentice, R. L. (1986). Binary regression using an extended beta-binomial distribution, with discussion of correlation induced by covariate measurement errors. Journal of the American Statistical Association, 81, 321–327.
Extbetabinom
,
betabinomial
,
betabinomialff
,
binomialff
,
dirmultinomial
,
cloglink
,
lirat
.
# Example 1
edata <- data.frame(N = 10, mu = 0.5, rho = 0.1)
edata <- transform(edata, y = rextbetabinom(100, N, mu, rho = rho))
fit1 <- vglm(cbind(y, N-y) ~ 1, extbetabinomial, edata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(cbind(depvar(fit1), weights(fit1, type = "prior")))

# Example 2: VFL model
## Not run: 
N <- size1 <- 10; nn <- 2000; set.seed(1)
edata <-  # Generate the data set. Expensive.
  data.frame(x2 = runif(nn), ooo = log(size1 / (size1 - 1)))
edata <- transform(edata, x1copy = 1, x2copy = x2,
  y2 = rextbetabinom(nn, size1,  # Expensive
         logitlink(1 + x2, inverse = TRUE),
         cloglink(ooo + 1 - 0.5 * x2, inv = TRUE)))
fit2 <- vglm(data = edata,
             cbind(y2, N - y2) ~ x2 + x1copy + x2copy,
             extbetabinomial(zero = NULL, vfl = TRUE,
                             Form2 = ~ x1copy + x2copy - 1),
             offset = cbind(0, ooo), trace = TRUE)
coef(fit2, matrix = TRUE)
wald.stat(fit2, values0 = c(1, 1, -0.5))
## End(Not run)
Maximum likelihood estimation of the 1-parameter extended log-F distribution.
extlogF1(tau = c(0.25, 0.5, 0.75), parallel = TRUE ~ 0,
    seppar = 0, tol0 = -0.001, llocation = "identitylink",
    ilocation = NULL, lambda.arg = NULL, scale.arg = 1,
    ishrinkage = 0.95, digt = 4, idf.mu = 3, imethod = 1)
tau |
Numeric, the desired quantiles. A strictly increasing sequence,
each value must be in |
parallel |
Similar to Setting |
seppar , tol0
|
Numeric, both of unit length and nonnegative,
the separation and shift parameters.
If avoiding the quantile crossing problem is of concern to you, try increasing seppar. |
llocation , ilocation
|
See |
lambda.arg |
Positive tuning parameter which controls the sharpness of the cusp.
The limit as it approaches 0 is probably very similar to
|
scale.arg |
Positive scale parameter and sometimes called |
ishrinkage , idf.mu , digt
|
Similar to |
imethod |
Initialization method.
Either the value 1, 2, or ....
See |
This is an experimental family function for quantile regression.
Fasiolo et al. (2020) propose an extended log-F distribution
(ELF)
however this family function only estimates the location parameter.
The distribution has a scale parameter which can be inputted
(default value is unity).
One location parameter is estimated for each tau
value
and these are the estimated quantiles.
For quantile regression it is not necessary to estimate
the scale parameter since the log-likelihood function is
triangle shaped.
The ELF is used as an approximation of the asymmetric Laplace
distribution (ALD).
The latter cannot be estimated properly using Fisher scoring/IRLS
but the ELF holds promise because it has continuous derivatives
and therefore fewer problems with the regularity conditions.
Because the ELF is fitted to data to obtain an
empirical result the convergence behaviour may not be gentle
and smooth.
Hence there is a function-specific control function called
extlogF1.control
which has something like
stepsize = 0.5
and maxits = 100
.
It has been found that slowing down the rate of convergence produces greater stability during the estimation process. Regardless, convergence should always be monitored carefully.
This function accepts a vector response but not a matrix response.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
Changes will occur in the future to fine-tune things.
In general
setting trace = TRUE
is strongly encouraged because it is
needful to check that convergence occurs properly.
If seppar > 0
then logLik(fit)
will return the
penalized log-likelihood.
Thomas W. Yee
Fasiolo, M., Wood, S. N., Zaffran, M., Nedellec, R. and Goude, Y. (2020). Fast calibrated additive quantile regression. J. Amer. Statist. Assoc., in press.
Yee, T. W. (2020). On quantile regression based on the 1-parameter extended log-F distribution. In preparation.
dextlogF
,
is.crossing
,
fix.crossing
,
eCDF
,
vglm.control
,
logF
,
alaplace1
,
dalap
,
lms.bcn
.
## Not run: 
nn <- 1000; mytau <- c(0.25, 0.75)
edata <- data.frame(x2 = sort(rnorm(nn)))
edata <- transform(edata, y1 = 1 + x2 + rnorm(nn, sd = exp(-1)),
                   y2 = cos(x2) / (1 + abs(x2)) + rnorm(nn, sd = exp(-1)))
fit1 <- vglm(y1 ~ x2, extlogF1(tau = mytau), data = edata)  # trace = TRUE
fit2 <- vglm(y2 ~ bs(x2, 6), extlogF1(tau = mytau), data = edata)
coef(fit1, matrix = TRUE)
fit2@extra$percentile  # Empirical percentiles here
summary(fit2)
c(is.crossing(fit1), is.crossing(fit2))
head(fitted(fit1))
plot(y2 ~ x2, edata, col = "blue")
matlines(with(edata, x2), fitted(fit2), col = "orange", lty = 1, lwd = 2)
## End(Not run)
Extractor function for the name of the family function of an object in the VGAM package.
familyname(object, ...)
familyname.vlm(object, all = FALSE, ...)
object |
Some VGAM object, for example, having
class |
all |
If |
... |
Other possible arguments for the future. |
Currently VGAM implements over 150 family functions.
This function returns the name of the function assigned
to the family
argument, for modelling
functions such as vglm
and vgam
.
Sometimes a slightly different answer is returned, e.g.,
propodds
really calls cumulative
with some arguments set, hence the output returned by
this function is "cumulative"
(note that one day
this might change, however).
A character string or vector.
Arguments used in the invocation are not included. Possibly this is something to be done in the future.
pneumo <- transform(pneumo, let = log(exposure.time))
fit1 <- vglm(cbind(normal, mild, severe) ~ let,
             cumulative(parallel = TRUE, reverse = TRUE), data = pneumo)
familyname(fit1)
familyname(fit1, all = TRUE)
familyname(propodds())  # "cumulative"
Estimates the parameter of a Felix distribution by maximum likelihood estimation.
felix(lrate = extlogitlink(min = 0, max = 0.5), imethod = 1)
lrate |
Link function for the parameter,
called |
imethod |
See |
The Felix distribution is an important basic Lagrangian distribution.
The density function is

f(y; rate) = y^((y-3)/2) * rate^((y-1)/2) * exp(-rate * y) / ((y-1)/2)!

where y = 1, 3, 5, ... and 0 < rate < 0.5.
The mean is 1/(1 - 2 * rate)
(returned as the fitted values).
Fisher scoring is implemented.
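Assuming the support and moment formula stated above, a quick
numerical check is possible:

rate <- 0.25
yy <- seq(1, 301, by = 2)   # The support: odd integers
sum(yy * dfelix(yy, rate))  # Approximates the mean
1 / (1 - 2 * rate)          # Equals 2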
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
T. W. Yee
Consul, P. C. and Famoye, F. (2006). Lagrangian Probability Distributions, Boston, USA: Birkhauser.
fdata <- data.frame(y = 2 * rpois(n = 200, 1) + 1)  # Not real data!
fit <- vglm(y ~ 1, felix, data = fdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density for the Felix distribution.
dfelix(x, rate = 0.25, log = FALSE)
x |
vector of quantiles. |
rate |
See |
log |
Logical.
If |
See felix
, the VGAM family function
for estimating the parameter,
for the formula of the probability density function and other
details.
dfelix
gives the density.
The default value of rate
is subjective.
T. W. Yee
## Not run: 
rate <- 0.25; x <- 1:15
plot(x, dfelix(x, rate), type = "h", las = 1, col = "blue",
     ylab = paste("dfelix(rate=", rate, ")"),
     main = "Felix density function")
## End(Not run)
Maximum likelihood estimation of the (2-parameter) F distribution.
fff(link = "loglink", idf1 = NULL, idf2 = NULL, nsimEIM = 100, imethod = 1, zero = NULL)
link |
Parameter link function for both parameters.
See |
idf1 , idf2
|
Numeric and positive. Initial value for the parameters. The default is to choose each value internally. |
nsimEIM , zero
|
See |
imethod |
Initialization method. Either the value 1 or 2.
If both fail try setting values for
|
The F distribution is named after Fisher and has
a density function
with two parameters, called df1
and df2
here.
This function treats these degrees of freedom
as positive reals
rather than integers.
The mean of the distribution is
df2 / (df2 - 2)
provided df2 > 2,
and its variance is
2 * df2^2 * (df1 + df2 - 2) / (df1 * (df2 - 2)^2 * (df2 - 4))
provided df2 > 4.
The estimated mean is returned as the fitted values.
Although the F distribution can be defined to accommodate a
non-centrality parameter
ncp
, it is assumed zero here.
It should not be too difficult to handle any known
ncp
; this is something to do in the near future.
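A quick simulation check of the moment formulas above (a sketch,
not part of the package):

df1 <- 5; df2 <- 10
yy <- rf(5e5, df1, df2)
c(mean(yy), df2 / (df2 - 2))  # Both near 1.25
c(var(yy),  2 * df2^2 * (df1 + df2 - 2) /
            (df1 * (df2 - 2)^2 * (df2 - 4)))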
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
Numerical problems will occur when the estimates of the parameters are too low or too high.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
## Not run: 
fdata <- data.frame(x2 = runif(nn <- 2000))
fdata <- transform(fdata, df1 = exp(2+0.5*x2), df2 = exp(2-0.5*x2))
fdata <- transform(fdata, y = rf(nn, df1, df2))
fit <- vglm(y ~ x2, fff, data = fdata, trace = TRUE)
coef(fit, matrix = TRUE)
## End(Not run)
A support function for the argument xij
,
it generates a matrix
of an appropriate dimension.
fill1(x, values = 0, ncolx = ncol(x))
x |
A vector or matrix which is used to determine
the dimension of the
answer, in particular, the number of rows.
After converting |
values |
Numeric.
The answer contains these values,
which are recycled columnwise if necessary, i.e.,
as |
ncolx |
The number of columns of the returned matrix.
The default is the number of columns of |
The xij
argument for vglm
allows
the user to input
variables specific to each linear/additive predictor.
For example, consider
the bivariate logit model where the
first/second linear/additive
predictor is the logistic regression of the
first/second binary response
respectively. The third linear/additive predictor
is log(OR) =
eta3
, where OR
is the odds ratio.
If one has ocular pressure
as a covariate in this model then xij
is required to handle the
ocular pressure for each eye, since these will
be different in general.
[This contrasts with a variable such
as age
, the age of the
person, which has a common value for both eyes.]
In order to input
these data into vglm
one often
finds that functions
fill1
, fill2
, etc. are useful.
All terms in the xij
and formula
arguments in vglm
must appear in the form2
argument too.
matrix(values, nrow=nrow(x), ncol=ncolx)
,
i.e., a matrix
consisting of values values
,
with the number of rows matching
x
, and the default number of columns
is the number of columns
of x
.
The effect of the xij
argument is after
other arguments such as
exchangeable
and zero
.
Hence xij
does not affect constraint matrices.
Additionally, there are currently three other
functions identical to fill1
, called fill2
, fill3
and fill4
;
if you need more then assign fill5 = fill6 = fill1
, etc.
The reason for this is that if more than
one fill1
function is
needed then they must be unique.
For example,
xij = list(op ~ lop + rop + fill1(mop) + fill1(mop))
would reduce to
xij = list(op ~ lop + rop + fill1(mop))
, whereas
xij = list(op ~ lop + rop + fill1(mop) + fill2(mop))
would retain
all terms, which is needed.
In Examples 1 to 3 below, the xij
argument
illustrates covariates
that are specific to a linear predictor.
Here, lop
/rop
are
the ocular pressures of the left/right eye
in an artificial dataset,
and mop
is their mean.
Variables leye
and reye
might be the presence/absence of a particular
disease on the LHS/RHS
eye respectively.
In Example 3, the xij
argument illustrates fitting the
(exchangeable) model where there is a common smooth function
of the
ocular pressure. One should use regression splines since
s
in vgam
does not handle
the xij
argument. However, regression splines such as
bs
and ns
need
to have
the same basis functions here for both functions, and Example 3
illustrates a trick involving a function BS
to obtain this,
e.g., same knots. Although regression splines create more than a
single column per term in the model matrix,
fill1(BS(lop,rop))
creates the required (same) number of columns.
T. W. Yee
vglm.control
,
vglm
,
multinomial
,
Select
.
fill1(runif(5))
fill1(runif(5), ncol = 3)
fill1(runif(5), val = 1, ncol = 3)

# Generate (independent) eyes data for the examples below; OR=1.
## Not run: 
nn <- 1000  # Number of people
eyesdata <- data.frame(lop = round(runif(nn), 2),
                       rop = round(runif(nn), 2),
                       age = round(rnorm(nn, 40, 10)))
eyesdata <- transform(eyesdata,
  mop = (lop + rop) / 2,        # Mean ocular pressure
  op  = (lop + rop) / 2,        # Value unimportant unless plotting
# op  = lop,                    # Choose this if plotting
  eta1 = 0 - 2*lop + 0.04*age,  # Linear predictor for left eye
  eta2 = 0 - 2*rop + 0.04*age)  # Linear predictor for right eye
eyesdata <- transform(eyesdata,
  leye = rbinom(nn, size = 1, prob = logitlink(eta1, inverse = TRUE)),
  reye = rbinom(nn, size = 1, prob = logitlink(eta2, inverse = TRUE)))

# Example 1. All effects are linear.
fit1 <- vglm(cbind(leye, reye) ~ op + age,
             family = binom2.or(exchangeable = TRUE, zero = 3),
             data = eyesdata, trace = TRUE,
             xij = list(op ~ lop + rop + fill1(lop)),
             form2 = ~ op + lop + rop + fill1(lop) + age)
head(model.matrix(fit1, type = "lm"))   # LM model matrix
head(model.matrix(fit1, type = "vlm"))  # Big VLM model matrix
coef(fit1)
coef(fit1, matrix = TRUE)  # Unchanged with 'xij'
constraints(fit1)
max(abs(predict(fit1) - predict(fit1, new = eyesdata)))  # Okay
summary(fit1)
plotvgam(fit1, se = TRUE)  # Wrong, because it plots against op, not lop.
# So set op = lop in the above for a correct plot.

# Example 2. This uses regression splines on ocular pressure.
# It uses a trick to ensure common basis functions.
BS <- function(x, ...)
  sm.bs(c(x, ...), df = 3)[1:length(x), , drop = FALSE]  # trick
fit2 <- vglm(cbind(leye, reye) ~ BS(lop, rop) + age,
             family = binom2.or(exchangeable = TRUE, zero = 3),
             data = eyesdata, trace = TRUE,
             xij = list(BS(lop, rop) ~ BS(lop, rop) + BS(rop, lop) +
                                       fill1(BS(lop, rop))),
             form2 = ~ BS(lop, rop) + BS(rop, lop) + fill1(BS(lop, rop)) +
                       lop + rop + age)
head(model.matrix(fit2, type = "lm"))   # LM model matrix
head(model.matrix(fit2, type = "vlm"))  # Big VLM model matrix
coef(fit2)
coef(fit2, matrix = TRUE)
summary(fit2)
fit2@smart.prediction
max(abs(predict(fit2) - predict(fit2, new = eyesdata)))  # Okay
predict(fit2, new = head(eyesdata))  # OR is 'scalar' as zero=3
max(abs(head(predict(fit2)) -
        predict(fit2, new = head(eyesdata))))  # Should be 0
plotvgam(fit2, se = TRUE, xlab = "lop")  # Correct

# Example 3. Capture-recapture model with ephemeral and enduring
# memory effects. Similar to Yang and Chao (2005), Biometrics.
deermice <- transform(deermice, Lag1 = y1)
M.tbh.lag1 <-
  vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight + Lag1,
       posbernoulli.tb(parallel.t = FALSE ~ 0,
                       parallel.b = FALSE ~ 0,
                       drop.b = FALSE ~ 1),
       xij = list(Lag1 ~ fill1(y1) + fill1(y2) + fill1(y3) +
                         fill1(y4) + fill1(y5) + fill1(y6) +
                         y1 + y2 + y3 + y4 + y5),
       form2 = ~ sex + weight + Lag1 +
                 fill1(y1) + fill1(y2) + fill1(y3) + fill1(y4) +
                 fill1(y5) + fill1(y6) +
                 y1 + y2 + y3 + y4 + y5 + y6,
       data = deermice, trace = TRUE)
coef(M.tbh.lag1)
## End(Not run)
A data frame of a toxicity trial.
data(finney44)
A data frame with 6 observations on the following 3 variables.
pconc
a numeric vector, percent concentration of pyrethrins.
hatched
number of eggs that hatched.
unhatched
number of eggs that did not hatch.
Finney (1944) describes a toxicity trial of five different concentrations of pyrethrins (percent) plus a control that were administered to eggs of Ephestia kuhniella. The natural mortality rate is large, and a common adjustment is to use Abbott's formula.
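Abbott's formula corrects an observed mortality p for the control
mortality c via (p - c) / (1 - c); a minimal sketch (the helper
abbott below is hypothetical, not part of VGAM):

abbott <- function(p, c) (p - c) / (1 - c)
abbott(p = 0.55, c = 0.10)  # Corrected mortality: 0.5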
Finney, D. J. (1944). The application of the probit method to toxicity test data adjusted for mortality in the controls. Annals of Applied Biology, 31, 68–74.
Abbott, W. S. (1925). A method of computing the effectiveness of an insecticide. Journal of Economic Entomology, 18, 265–7.
data(finney44)
transform(finney44, mortality = unhatched / (hatched + unhatched))
Computes the Fisher Z transformation, including its inverse and the first two derivatives.
fisherzlink(theta, bminvalue = NULL, bmaxvalue = NULL, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bminvalue , bmaxvalue
|
Optional boundary values.
Values of |
inverse , deriv , short , tag
|
Details at |
The fisherz
link function is commonly used for
parameters that
lie between -1 and 1.
Numerical values of
theta
close to -1 or 1, or out of range, result in
Inf
, -Inf
, NA
or NaN
.
For deriv = 0
,
0.5 * log((1+theta)/(1-theta))
(same as atanh(theta)
)
when inverse = FALSE
,
and if inverse = TRUE
then
(exp(2*theta)-1)/(exp(2*theta)+1)
(same as tanh(theta)
).
For deriv = 1
, then the function returns
d eta
/ d theta
as
a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
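A quick check (a minimal sketch) that fisherzlink()
agrees with atanh() and tanh(), as stated above:

theta <- 0.3
all.equal(fisherzlink(theta), atanh(theta))                  # TRUE
all.equal(fisherzlink(atanh(theta), inverse = TRUE), theta)  # TRUE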
Numerical instability may occur when theta
is close to -1 or 1.
One way of overcoming this is to use,
e.g.,
bminvalue
.
The link function rhobitlink
is
very similar to fisherzlink
,
e.g., just twice the value of fisherzlink
.
This link function may be renamed to atanhlink
in the near future.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
theta <- seq(-0.99, 0.99, by = 0.01)
y <- fisherzlink(theta)
## Not run: 
plot(theta, y, type = "l", las = 1, ylab = "",
     main = "fisherzlink(theta)", col = "blue")
abline(v = (-1):1, h = 0, lty = 2, col = "gray")
## End(Not run)
x <- c(seq(-1.02, -0.98, by = 0.01), seq(0.97, 1.02, by = 0.01))
fisherzlink(x)  # Has NAs
fisherzlink(x, bminvalue = -1 + .Machine$double.eps,
               bmaxvalue =  1 - .Machine$double.eps)  # Has no NAs
Maximum likelihood estimation of the 2-parameter Fisk distribution.
fisk(lscale = "loglink", lshape1.a = "loglink", iscale = NULL,
     ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5),
     gshape1.a = seq(0.75, 4, by = 0.25),
     probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lss |
See |
lshape1.a , lscale
|
Parameter link functions applied to the
(positive) parameters |
iscale , ishape1.a , imethod , zero
|
See |
gscale , gshape1.a
|
See |
probs.y |
See |
The 2-parameter Fisk (aka log-logistic) distribution
is the 4-parameter
generalized beta II distribution with
shape parameters p = q = 1.
It is also the 3-parameter Singh-Maddala distribution
with shape parameter q = 1, as well as the
Dagum distribution with p = 1.
More details can be found in Kleiber and Kotz (2003).

The Fisk distribution has density

f(y) = a * y^(a-1) / (b^a * (1 + (y/b)^a)^2)

for y > 0, a > 0, b > 0.
Here, b is the scale parameter
scale
,
and a is a shape parameter.
The cumulative distribution function is

F(y) = 1 - (1 + (y/b)^a)^(-1).

The mean is

E(Y) = b * gamma(1 + 1/a) * gamma(1 - 1/a)

provided a > 1; these are returned as the fitted values.
This family function handles multiple responses.
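A quick numerical check of the CDF and mean formulas above
(a sketch, not part of the package):

aa <- 2; bb <- 3; yy <- 1.5
c(pfisk(yy, scale = bb, shape1.a = aa),
  1 - 1 / (1 + (yy/bb)^aa))                # Should agree
c(mean(rfisk(5e5, scale = bb, shape1.a = aa)),
  bb * gamma(1 + 1/aa) * gamma(1 - 1/aa))  # Both near 4.71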
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
See the notes in genbetaII
.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Fisk
,
genbetaII
,
betaII
,
dagum
,
sinmad
,
inv.lomax
,
lomax
,
paralogistic
,
inv.paralogistic
,
simulate.vlm
.
fdata <- data.frame(y = rfisk(200, shape = exp(1), exp(2)))
fit <- vglm(y ~ 1, fisk(lss = FALSE), data = fdata, trace = TRUE)
fit <- vglm(y ~ 1, fisk(ishape1.a = exp(2)), fdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, distribution function, quantile function and random
generation for the Fisk distribution with
shape parameter a
and scale parameter scale
.
dfisk(x, scale = 1, shape1.a, log = FALSE)
pfisk(q, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE)
qfisk(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE)
rfisk(n, scale = 1, shape1.a)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
shape1.a |
shape parameter. |
scale |
scale parameter. |
log |
Logical.
If |
lower.tail , log.p
|
See fisk
, which is the VGAM family function
for estimating the parameters by maximum likelihood estimation.
dfisk
gives the density,
pfisk
gives the distribution function,
qfisk
gives the quantile function, and
rfisk
generates random deviates.
The Fisk distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee and Kai Huang
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
fdata <- data.frame(y = rfisk(1000, shape = exp(1), scale = exp(2)))
fit <- vglm(y ~ 1, fisk(lss = FALSE), data = fdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
Extractor function for the fitted values of a model object that
inherits from a vector linear model (VLM), e.g.,
a model of class "vglm"
.
fittedvlm(object, drop = FALSE, type.fitted = NULL, percentiles = NULL, ...)
object |
a model object that inherits from a VLM. |
drop |
Logical.
If |
type.fitted |
Character.
Some VGAM family functions have a |
percentiles |
See |
... |
Currently unused. |
The “fitted values” usually correspond to the mean response; however, because the VGAM package fits so many models, this sometimes refers to quantities such as quantiles. The mean may not even exist, e.g., for a Cauchy distribution.
Note that the fitted value is output from
the @linkinv
slot
of the VGAM family function,
where the eta
argument is
the matrix
of linear predictors.
The fitted values evaluated at the final IRLS iteration.
This function is one of several extractor functions for
the VGAM package. Others include coef
,
deviance
, weights
and constraints
etc.
This function is equivalent to the methods function for the
generic function fitted.values
.
If fit
is a VLM or VGLM then fitted(fit)
and
predict(fit, type = "response")
should be equivalent
(see predictvglm
).
The latter has the advantage in that it handles a newdata
argument so that the fitted values can be computed for a
different data set.
Thomas W. Yee
Chambers, J. M. and T. J. Hastie (eds) (1992). Statistical Models in S. Wadsworth & Brooks/Cole.
fitted
,
predictvglm
,
vglmff-class
.
# Categorical regression example 1
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo))
fitted(fit1)

# LMS quantile regression example 2
fit2 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1),
             data = bmi.nz, trace = TRUE)
head(predict(fit2, type = "response"))  # Equals to both these:
head(fitted(fit2))
predict(fit2, type = "response", newdata = head(bmi.nz))

# Zero-inflated example 3
zdata <- data.frame(x2 = runif(nn <- 1000))
zdata <- transform(zdata, pstr0.3  = logitlink(-0.5       , inverse = TRUE),
                          lambda.3 =   loglink(-0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda.3, pstr0 = pstr0.3))
fit3 <- vglm(y1 ~ x2, zipoisson(zero = NULL), zdata, trace = TRUE)
head(fitted(fit3, type.fitted = "mean"))       # E(Y) (the default)
head(fitted(fit3, type.fitted = "pobs0"))      # Pr(Y = 0)
head(fitted(fit3, type.fitted = "pstr0"))      # Prob of a structural 0
head(fitted(fit3, type.fitted = "onempstr0"))  # 1 - Pr(structural 0)
Returns a similar object fitted with columns of the constraint matrices amalgamated so it is a partially parallel VGLM object. The columns combined correspond to certain crossing quantiles. This applies especially to an extlogF1() VGLM object.
fix.crossing.vglm(object, maxit = 100, trace = FALSE, ...)
object |
an object such as
a |
maxit , trace
|
values for overwriting components in |
... |
additional optional arguments. Currently unused. |
The quantile crossing problem has been described as
disturbing and embarrassing.
This function was specifically written for
a vglm
with family function extlogF1
.
It examines the fitted quantiles of object
to see if any cross.
If so, then a pair of columns is combined to make those
two quantiles parallel.
After fitting the submodel it then repeats testing for
crossing quantiles and repairing them, until there is
no more quantile crossing detected.
Note that it is possible that the quantiles cross in
some subset of the covariate space not covered by the
data—see is.crossing
.
This function is fragile and likely to change in the future.
For extlogF1
models, it is assumed
that argument data
has been assigned a data frame,
and
that the default value of the argument parallel
has been used; this means that the second constraint
matrix is diag(M)
.
The constraint matrix of the intercept term remains unchanged
as diag(M)
.
An object very similar to the original object, but with possibly different constraint matrices (partially parallel) so as to remove any quantile crossing.
extlogF1
,
is.crossing
,
lms.bcn
,
vglm
.
## Not run: 
ooo <- with(bmi.nz, order(age))
bmi.nz <- bmi.nz[ooo, ]  # Sort by age
with(bmi.nz, plot(age, BMI, col = "blue"))
mytau <- c(50, 93, 95, 97) / 100  # Some quantiles are quite close
fit1 <- vglm(BMI ~ ns(age, 7), extlogF1(mytau), bmi.nz, trace = TRUE)
plot(BMI ~ age, bmi.nz, col = "blue", las = 1,
     main = "Partially parallel (darkgreen) & nonparallel quantiles",
     sub = "Crossing quantiles are orange")
fix.crossing(fit1)
matlines(with(bmi.nz, age), fitted(fit1), lty = 1, col = "orange")
fit2 <- fix.crossing(fit1)  # Some quantiles have been fixed
constraints(fit2)
matlines(with(bmi.nz, age), fitted(fit2), lty = "dashed",
         col = "darkgreen", lwd = 2)
## End(Not run)
The flourbeetle
data frame has 8 rows and 4 columns.
Two columns are explanatory, the other two are responses.
data(flourbeetle)
This data frame contains the following columns:
logdose: a numeric vector; log10 applied to CS2mgL.
CS2mgL: a numeric vector; the concentration of gaseous carbon disulphide in mg per litre.
exposed: a numeric vector, counts; the number of beetles exposed to the poison.
killed: a numeric vector, counts; the numbers killed.
These data were originally given in Table IV of Bliss (1935) and are the combination of two series of toxicological experiments involving Tribolium confusum, also known as the flour beetle. Groups of such adult beetles were exposed for 5 hours to gaseous carbon disulphide at different concentrations, and their mortality measured.
Bliss, C. I. (1935). The calculation of the dosage-mortality curve. Annals of Applied Biology, 22, 134–167.
fit1 <- vglm(cbind(killed, exposed - killed) ~ logdose,
             binomialff(link = probitlink), flourbeetle, trace = TRUE)
summary(fit1)
Density, distribution function, quantile function and random generation for the (generalized) folded-normal distribution.
dfoldnorm(x, mean = 0, sd = 1, a1 = 1, a2 = 1, log = FALSE)
pfoldnorm(q, mean = 0, sd = 1, a1 = 1, a2 = 1,
          lower.tail = TRUE, log.p = FALSE)
qfoldnorm(p, mean = 0, sd = 1, a1 = 1, a2 = 1,
          lower.tail = TRUE, log.p = FALSE, ...)
rfoldnorm(n, mean = 0, sd = 1, a1 = 1, a2 = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as |
mean , sd
|
see |
a1 , a2
|
see |
log |
Logical.
If |
lower.tail , log.p
|
|
... |
Arguments that can be passed into |
See foldnormal
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function
and other details.
dfoldnorm
gives the density,
pfoldnorm
gives the distribution function,
qfoldnorm
gives the quantile function, and
rfoldnorm
generates random deviates.
T. W. Yee and Kai Huang.
Suggestions from Mauricio Romero led to improvements
in qfoldnorm()
.
## Not run: 
m <- 1.5; SD <- exp(0)
x <- seq(-1, 4, len = 501)
plot(x, dfoldnorm(x, m = m, sd = SD), type = "l", ylim = 0:1,
     ylab = paste("foldnorm(m = ", m, ", sd = ", round(SD, digits = 3), ")"),
     las = 1, main = "Blue is density, orange is CDF", col = "blue",
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0, col = "gray50")
lines(x, pfoldnorm(x, m = m, sd = SD), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qfoldnorm(probs, m = m, sd = SD)
lines(Q, dfoldnorm(Q, m, SD), col = "purple", lty = 3, type = "h")
lines(Q, pfoldnorm(Q, m, SD), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pfoldnorm(Q, m = m, sd = SD) - probs))  # Should be 0
## End(Not run)
Fits a (generalized) folded (univariate) normal distribution.
foldnormal(lmean = "identitylink", lsd = "loglink", imean = NULL,
           isd = NULL, a1 = 1, a2 = 1, nsimEIM = 500, imethod = 1,
           zero = NULL)
lmean , lsd
|
Link functions for the mean and standard
deviation parameters of the usual univariate normal distribution.
They are |
imean , isd
|
Optional initial values for |
a1 , a2
|
Positive weights, called |
nsimEIM , imethod , zero
|
If a random variable has an ordinary univariate normal distribution then the absolute value of that random variable has an ordinary folded normal distribution. That is, the sign has not been recorded; only the magnitude has been measured.
More generally, suppose Y is normal with
mean
mean
and
standard deviation sd
.
Let W = max(a1 * Y, -a2 * Y),
where a1 and a2 are positive weights.
This means that
W = a1 * Y for Y > 0,
and
W = -a2 * Y for Y < 0.
Then W
is said to have a
generalized folded normal distribution.
The ordinary folded normal distribution corresponds to the
special case a1 = a2 = 1.
The probability density function of the ordinary
folded normal distribution
can be written
dnorm(y, mean, sd) + dnorm(y, -mean, sd)
for
y >= 0.
By default,
mean
and log(sd)
are the
linear/additive
predictors.
Having mean=0
and sd=1
results in the
half-normal distribution.
The mean of an ordinary folded normal distribution is

mean * (1 - 2 * pnorm(-mean/sd)) + sd * sqrt(2/pi) * exp(-mean^2 / (2 * sd^2))

and these are returned as the fitted values.
Here, pnorm is the cumulative distribution
function of a
standard normal.
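A quick simulation check of this mean formula for the ordinary
(a1 = a2 = 1) case; a sketch, not part of the package:

m <- 2; SD <- 1.5
ww <- abs(rnorm(5e5, m, SD))  # Ordinary folded normal deviates
c(mean(ww),
  m * (1 - 2 * pnorm(-m/SD)) + SD * sqrt(2/pi) * exp(-m^2 / (2 * SD^2)))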
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such
as vglm
and vgam
.
Under- or over-flow may occur if the data is ill-conditioned. It is recommended that several different initial values be used to help avoid local solutions.
The response variable for this family function is the same as
uninormal
except positive values are required.
Reasonably good initial values are needed.
Fisher scoring using simulation is implemented.
See CommonVGAMffArguments
for
general information about
many of these arguments.
Yet to do: implement the results of Johnson (1962) which gives expressions for the EIM, albeit, under a different parameterization. Also, one element of the EIM appears to require numerical integration.
Thomas W. Yee
Lin, P. C. (2005). Application of the generalized folded-normal distribution to the process capability measures. International Journal of Advanced Manufacturing Technology, 26, 825–830.
Johnson, N. L. (1962). The folded normal distribution: accuracy of estimation by maximum likelihood. Technometrics, 4, 249–256.
rfoldnorm
,
uninormal
,
dnorm
,
skewnormal
.
## Not run: 
m <- 2; SD <- exp(1)
fdata <- data.frame(y = rfoldnorm(n <- 1000, m = m, sd = SD))
hist(with(fdata, y), prob = TRUE,
     main = paste("foldnormal(m = ", m, ", sd = ", round(SD, 2), ")"))
fit <- vglm(y ~ 1, foldnormal, data = fdata, trace = TRUE)
coef(fit, matrix = TRUE)
(Cfit <- Coef(fit))
# Add the fit to the histogram:
mygrid <- with(fdata, seq(min(y), max(y), len = 200))
lines(mygrid, dfoldnorm(mygrid, Cfit[1], Cfit[2]), col = "orange")
## End(Not run)
The methods function for formula
to
extract the formula from a fitted object,
as well as a methods function to return the names
of the terms in the formula.
## S3 method for class 'vlm'
formula(x, ...)
formulavlm(x, form.number = 1, ...)
term.names(model, ...)
term.namesvlm(model, form.number = 1, ...)
x , model
|
A fitted model object. |
form.number |
Formula number, either 1 or 2,
which correspond to the arguments |
... |
Same as |
The formula
methods function is
based on formula
.
The formula
methods function should return something similar to
formula
.
The term.names
methods function should return a character string
with the terms in the formula; this includes any intercept (which
is denoted by "(Intercept)"
as the first element.)
Thomas W. Yee
# Example: this is based on a glm example
counts <- c(18,17,15,20,10,20,25,13,12)
outcome <- gl(3, 1, 9); treatment <- gl(3, 3)
vglm.D93 <- vglm(counts ~ outcome + treatment, family = poissonff)
formula(vglm.D93)
pdata <- data.frame(counts, outcome, treatment)  # Better style
vglm.D93 <- vglm(counts ~ outcome + treatment, poissonff, data = pdata)
formula(vglm.D93)
term.names(vglm.D93)
responseName(vglm.D93)
has.intercept(vglm.D93)
Density, distribution function, and random generation for the (one parameter) bivariate Frank distribution.
dbifrankcop(x1, x2, apar, log = FALSE)
pbifrankcop(q1, q2, apar)
rbifrankcop(n, apar)
x1 , x2 , q1 , q2
|
vector of quantiles. |
n |
number of observations.
Same as in |
apar |
the positive association parameter. |
log |
Logical.
If |
See bifrankcop
, the VGAM
family function for estimating the association
parameter by maximum likelihood estimation, for the formula of
the cumulative distribution function and other details.
dbifrankcop
gives the density,
pbifrankcop
gives the distribution function, and
rbifrankcop
generates random deviates (a two-column matrix).
T. W. Yee
Genest, C. (1987). Frank's family of bivariate distributions. Biometrika, 74, 549–555.
## Not run: 
N <- 100; apar <- exp(2)
xx <- seq(-0.30, 1.30, len = N)
ox <- expand.grid(xx, xx)
zedd <- dbifrankcop(ox[, 1], ox[, 2], apar = apar)
contour(xx, xx, matrix(zedd, N, N))
zedd <- pbifrankcop(ox[, 1], ox[, 2], apar = apar)
contour(xx, xx, matrix(zedd, N, N))
plot(rr <- rbifrankcop(n = 3000, apar = exp(4)))
par(mfrow = c(1, 2))
hist(rr[, 1]); hist(rr[, 2])  # Should be uniform
## End(Not run)
Maximum likelihood estimation of the 2-parameter Frechet distribution.
frechet(location = 0, lscale = "loglink",
        lshape = logofflink(offset = -2), iscale = NULL,
        ishape = NULL, nsimEIM = 250, zero = NULL)
location |
Numeric. Location parameter.
It is called |
lscale , lshape
|
Link functions for the parameters;
see |
iscale , ishape , zero , nsimEIM
|
See |
The (3-parameter) Frechet distribution has a density function that can be written

f(y) = (s/b) * (b/(y - a))^(s+1) * exp(-(b/(y - a))^s)

for y > a and scale parameter b > 0.
The positive shape parameter is s.
The cumulative distribution function is

F(y) = exp(-(b/(y - a))^s).

The mean of Y
is a + b * gamma(1 - 1/s)
for s > 1
(these are returned as the fitted values).
The variance of Y
is b^2 * (gamma(1 - 2/s) - gamma(1 - 1/s)^2)
for s > 2.
Family frechet
has the location parameter a known, and
log(b) and log(s - 2) are the default
linear/additive predictors.
The working weights are estimated by simulated Fisher scoring.
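A quick simulation check of the mean formula above
(location 0 here; a sketch, not part of the package):

Scale <- 2; Shape <- 5  # Shape > 1, so the mean exists
c(mean(rfrechet(5e5, scale = Scale, shape = Shape)),
  Scale * gamma(1 - 1/Shape))  # Both near 2.33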
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such
as vglm
and vgam
.
Family function frechet
may fail for low values of
the shape parameter, e.g., near 2 or lower.
T. W. Yee
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005). Extreme Value and Related Models with Applications in Engineering and Science, Hoboken, NJ, USA: Wiley-Interscience.
## Not run: 
set.seed(123)
fdata <- data.frame(y1 = rfrechet(1000, shape = 2 + exp(1)))
with(fdata, hist(y1))
fit2 <- vglm(y1 ~ 1, frechet, data = fdata, trace = TRUE)
coef(fit2, matrix = TRUE)
Coef(fit2)
head(fitted(fit2))
with(fdata, mean(y1))
head(weights(fit2, type = "working"))
vcov(fit2)
## End(Not run)
Density, distribution function, quantile function and random generation for the three parameter Frechet distribution.
dfrechet(x, location = 0, scale = 1, shape, log = FALSE)
pfrechet(q, location = 0, scale = 1, shape,
         lower.tail = TRUE, log.p = FALSE)
qfrechet(p, location = 0, scale = 1, shape,
         lower.tail = TRUE, log.p = FALSE)
rfrechet(n, location = 0, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Passed into |
location , scale , shape
|
the location parameter |
log |
Logical.
If |
lower.tail , log.p
|
See frechet
, the VGAM
family function for estimating the 2 parameters
(without location
parameter) by maximum likelihood estimation, for the formula
of the probability density function and range restrictions on
the parameters.
dfrechet
gives the density,
pfrechet
gives the distribution function,
qfrechet
gives the quantile function, and
rfrechet
generates random deviates.
T. W. Yee and Kai Huang
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005). Extreme Value and Related Models with Applications in Engineering and Science, Hoboken, NJ, USA: Wiley-Interscience.
## Not run: 
shape <- 5
x <- seq(-0.1, 3.5, length = 401)
plot(x, dfrechet(x, shape = shape), type = "l", ylab = "",
     main = "Frechet density divided into 10 equal areas",
     sub = "Orange = CDF", las = 1)
abline(h = 0, col = "blue", lty = 2)
qq <- qfrechet(seq(0.1, 0.9, by = 0.1), shape = shape)
lines(qq, dfrechet(qq, shape = shape), col = 2, lty = 2, type = "h")
lines(x, pfrechet(q = x, shape = shape), col = "orange")
## End(Not run)
Estimate the four parameters of the Freund (1961) bivariate extension of the exponential distribution by maximum likelihood estimation.
freund61(la = "loglink", lap = "loglink", lb = "loglink",
         lbp = "loglink", ia = NULL, iap = NULL, ib = NULL,
         ibp = NULL, independent = FALSE, zero = NULL)
la , lap , lb , lbp
|
Link functions applied to the (positive)
parameters |
ia , iap , ib , ibp
|
Initial value for the four parameters respectively. The default is to estimate them all internally. |
independent |
Logical.
If |
zero |
A vector specifying which
linear/additive predictors are modelled as intercepts only.
The values can be from the set {1,2,3,4}.
The default is none of them.
See |
This model represents one type of bivariate extension
of the exponential
distribution that is applicable to certain problems,
in particular,
to two-component systems which can function if one of
the components
has failed. For example, engine failures in
two-engine planes, paired
organs such as peoples' eyes, ears and kidneys.
Suppose y1 and y2 are random variables
representing the lifetimes of
two components, 1 and 2,
in a two-component system.
The dependence between y1 and y2
is essentially such that the failure of the second
component
changes the parameter of the exponential life distribution
of the first
component from alpha
to alpha'
, while the failure of
the first
component
changes the parameter of the exponential life distribution
of the second
component from beta
to beta'
.

The joint probability density function is given by

f(y1, y2) = alpha * beta' * exp(-beta' * y2 - (alpha + beta - beta') * y1)

for 0 < y1 < y2, and

f(y1, y2) = beta * alpha' * exp(-alpha' * y1 - (alpha + beta - alpha') * y2)

for 0 < y2 < y1.
Here, all four parameters are positive, as well
as the responses
y1 and y2.
Under this model, the probability that component 1
is the first to fail is
alpha/(alpha + beta).
The time to the first failure is distributed as an
exponential distribution with rate
alpha + beta. Furthermore, the
distribution of the time from first failure to failure
of the other component is a mixture of
Exponential(alpha') and
Exponential(beta') with proportions
beta/(alpha + beta)
and alpha/(alpha + beta)
respectively.

The marginal distributions are, in general, not exponential.
By default, the linear/additive predictors are
eta1 = log(alpha),
eta2 = log(alpha'),
eta3 = log(beta),
eta4 = log(beta').
A special case is when
alpha = alpha'
and
beta = beta'
, which means that
y1 and y2
are independent, and
both have an ordinary exponential distribution with means
1/alpha and 1/beta
respectively.
Fisher scoring is used, and the initial values correspond to the MLEs of an intercept model. Consequently, convergence may take only one iteration.
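The generative description above can be simulated directly.
A minimal sketch follows (the helper rfreund61 and the parameter
values are hypothetical, not part of VGAM):

rfreund61 <- function(n, alpha, alphap, beta, betap) {
  t1 <- rexp(n, rate = alpha + beta)        # Time to the first failure
  one.first <- runif(n) < alpha / (alpha + beta)  # Component 1 first?
  extra <- ifelse(one.first, rexp(n, betap), rexp(n, alphap))
  cbind(y1 = ifelse(one.first, t1, t1 + extra),
        y2 = ifelse(one.first, t1 + extra, t1))
}
yy <- rfreund61(1e5, alpha = 1, alphap = 2, beta = 3, betap = 4)
colMeans(yy)  # Near (2+3)/(2*(1+3)) = 0.625 and (4+1)/(4*(1+3)) = 0.3125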
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
To estimate all four parameters, it is necessary to have some
data with y1 < y2 and some with y2 < y1.
The response must be a two-column matrix, with columns
y1 and y2.
Currently, the fitted value is a matrix with two columns; the
first column has values
(alpha' + beta) / (alpha' * (alpha + beta))
for the mean of y1,
while the second column has values
(beta' + alpha) / (beta' * (alpha + beta))
for the mean of y2.
The variance of y1 is

(alpha'^2 + 2 * alpha * beta + beta^2) / (alpha'^2 * (alpha + beta)^2),

the variance of y2 is

(beta'^2 + 2 * alpha * beta + alpha^2) / (beta'^2 * (alpha + beta)^2),

and the covariance of y1 and y2 is

(alpha' * beta' - alpha * beta) / (alpha' * beta' * (alpha + beta)^2).
T. W. Yee
Freund, J. E. (1961). A bivariate extension of the exponential distribution. Journal of the American Statistical Association, 56, 971–977.
fdata <- data.frame(y1 = rexp(nn <- 1000, rate = exp(1)))
fdata <- transform(fdata, y2 = rexp(nn, rate = exp(2)))
fit1 <- vglm(cbind(y1, y2) ~ 1, freund61, fdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
vcov(fit1)
head(fitted(fit1))
summary(fit1)
# y1 and y2 are independent, so fit an independence model
fit2 <- vglm(cbind(y1, y2) ~ 1, freund61(indep = TRUE),
             data = fdata, trace = TRUE)
coef(fit2, matrix = TRUE)
constraints(fit2)
pchisq(2 * (logLik(fit1) - logLik(fit2)),  # p-value
       df = df.residual(fit2) - df.residual(fit1), lower.tail = FALSE)
lrtest(fit1, fit2)  # Better alternative
Density, distribution function, quantile function and random generation for the generally altered, inflated, truncated and deflated binomial distribution. Both parametric and nonparametric variants are supported; these are based on finite mixtures of the parent with itself and the multinomial logit model (MLM) respectively.
dgaitdbinom(x, size.p, prob.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    size.a = size.p, size.i = size.p, size.d = size.p,
    prob.a = prob.p, prob.i = prob.p, prob.d = prob.p,
    log = FALSE, ...)
pgaitdbinom(q, size.p, prob.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    size.a = size.p, size.i = size.p, size.d = size.p,
    prob.a = prob.p, prob.i = prob.p, prob.d = prob.p,
    lower.tail = TRUE, ...)
qgaitdbinom(p, size.p, prob.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    size.a = size.p, size.i = size.p, size.d = size.p,
    prob.a = prob.p, prob.i = prob.p, prob.d = prob.p, ...)
rgaitdbinom(n, size.p, prob.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    size.a = size.p, size.i = size.p, size.d = size.p,
    prob.a = prob.p, prob.i = prob.p, prob.d = prob.p, ...)
x , q , p , n , log , lower.tail
|
Same meaning as in |
size.p , prob.p
|
Same meaning as in |
size.a , prob.a
|
See |
size.i , prob.i
|
See |
size.d , prob.d
|
See |
truncate |
See |
a.mix , i.mix , d.mix
|
See |
a.mlm , i.mlm , d.mlm
|
See |
pstr.mix , pstr.mlm , byrow.aid
|
See |
pobs.mix , pobs.mlm
|
See |
pdip.mix , pdip.mlm
|
See |
... |
Arguments such as |
These functions for the GAITD binomial distribution
are analogous to the GAITD Poisson,
hence most details have been put in
Gaitdpois
.
dgaitdbinom
gives the density,
pgaitdbinom
gives the distribution function,
qgaitdbinom
gives the quantile function, and
rgaitdbinom
generates random deviates.
The default values of the arguments correspond to ordinary
dbinom
,
pbinom
,
qbinom
,
rbinom
respectively.
See Gaitdpois
about the dangers
of too much inflation and/or deflation on
GAITD PMFs, and the difficulty of detecting such.
Functions Posbinom
have been moved
to VGAMdata.
It is better to use
dgaitdbinom(x, size, prob, truncate = 0)
instead of
dposbinom(x, size, prob)
, etc.
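For example, a quick check (a sketch) that truncating 0 recovers
the positive binomial:

xx <- 1:5
dgaitdbinom(xx, 10, 0.2, truncate = 0)
dbinom(xx, 10, 0.2) / (1 - dbinom(0, 10, 0.2))  # Should match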
T. W. Yee.
Gaitdpois
,
Gaitdnbinom
,
multinomial
,
Gaitdlog
,
Gaitdzeta
.
size <- 20
ivec <- c(6, 10); avec <- c(8, 11); prob <- 0.25; xgrid <- 0:25
tvec <- 14; pobs.a <- 0.05; pstr.i <- 0.15
dvec <- 5; pdip.mlm <- 0.05
(ddd <- dgaitdbinom(xgrid, size, prob.p = prob, prob.a = prob + 0.05,
   truncate = tvec, pobs.mix = pobs.a,
   pdip.mlm = pdip.mlm, d.mlm = dvec,
   pobs.mlm = pobs.a, a.mlm = avec,
   pstr.mix = pstr.i, i.mix = ivec))
## Not run: 
dgaitdplot(c(size, prob), ylab = "Probability", xlab = "x",
   pobs.mix = pobs.a,
   pobs.mlm = pobs.a, a.mlm = avec, all.lwd = 3,
   pdip.mlm = pdip.mlm, d.mlm = dvec, fam = "binom",
   pstr.mix = pstr.i, i.mix = ivec, deflation = TRUE,
   main = "GAITD Combo PMF---Binomial Parent")
## End(Not run)
Fits a generally altered, inflated, truncated and deflated logarithmic regression by MLE. The GAITD combo model having 7 types of special values is implemented. This allows logarithmic mixtures on nested and/or partitioned support as well as a multinomial logit model for altered, inflated and deflated values. Truncation may include the upper tail.
gaitdlog(a.mix = NULL, i.mix = NULL, d.mix = NULL, a.mlm = NULL,
    i.mlm = NULL, d.mlm = NULL, truncate = NULL, max.support = Inf,
    zero = c("pobs", "pstr", "pdip"), eq.ap = TRUE, eq.ip = TRUE,
    eq.dp = TRUE, parallel.a = FALSE, parallel.i = FALSE,
    parallel.d = FALSE, lshape.p = "logitlink", lshape.a = lshape.p,
    lshape.i = lshape.p, lshape.d = lshape.p,
    type.fitted = c("mean", "shapes", "pobs.mlm", "pstr.mlm",
    "pdip.mlm", "pobs.mix", "pstr.mix", "pdip.mix", "Pobs.mix",
    "Pstr.mix", "Pdip.mix", "nonspecial", "Numer", "Denom.p",
    "sum.mlm.i", "sum.mix.i", "sum.mlm.d", "sum.mix.d",
    "ptrunc.p", "cdf.max.s"),
    gshape.p = -expm1(-7 * ppoints(12)), gpstr.mix = ppoints(7) / 3,
    gpstr.mlm = ppoints(7) / (3 + length(i.mlm)), imethod = 1,
    mux.init = c(0.75, 0.5, 0.75), ishape.p = NULL,
    ishape.a = ishape.p, ishape.i = ishape.p, ishape.d = ishape.p,
    ipobs.mix = NULL, ipstr.mix = NULL, ipdip.mix = NULL,
    ipobs.mlm = NULL, ipstr.mlm = NULL, ipdip.mlm = NULL,
    byrow.aid = FALSE, ishrinkage = 0.95, probs.y = 0.35)
truncate , max.support
|
See |
a.mix , i.mix , d.mix
|
See |
a.mlm , i.mlm , d.mlm
|
See |
lshape.p , lshape.a , lshape.i , lshape.d
|
Link functions.
See |
eq.ap , eq.ip , eq.dp
|
Single logical each.
See |
parallel.a , parallel.i , parallel.d
|
Single logical each.
See |
type.fitted , mux.init
|
See |
imethod , ipobs.mix , ipstr.mix , ipdip.mix
|
See |
ipobs.mlm , ipstr.mlm , ipdip.mlm , byrow.aid
|
See |
gpstr.mix , gpstr.mlm
|
See |
gshape.p , ishape.p
|
See |
ishape.a , ishape.i , ishape.d
|
See |
probs.y , ishrinkage
|
See |
zero |
See |
Many details to this family function can be
found in gaitdpoisson
because it
is also a 1-parameter discrete distribution.
This function currently does not handle
multiple responses. Further details are at
Gaitdlog
.
As alluded to above, when there are covariates
it is much more interpretable to model
the mean rather than the shape parameter.
Hence logffMlink
is
recommended. (This might become the default
in the future.) So installing VGAMextra
is a good idea.
Apart from the order of the linear/additive predictors,
the following are (or should be) equivalent:
gaitdlog()
and logff()
,
gaitdlog(a.mix = 1)
and oalog(zero = "pobs1")
,
gaitdlog(i.mix = 1)
and oilog(zero = "pstr1")
,
gaitdlog(truncate = 1)
and otlog()
.
The functions
oalog
,
oilog
and
otlog
have been placed in VGAMdata.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such
as vglm
,
rrvglm
and vgam
.
See gaitdpoisson
.
See gaitdpoisson
.
T. W. Yee
Gaitdlog
,
logff
,
logffMlink
,
Gaitdpois
,
gaitdpoisson
,
gaitdzeta
,
spikeplot
,
goffset
,
Trunc
,
oalog
,
oilog
,
otlog
,
CommonVGAMffArguments
,
rootogram4
,
simulate.vlm
.
## Not run: 
avec <- c(5, 10)  # Alter these values parametrically
ivec <- c(3, 15)  # Inflate these values
tvec <- c(6, 7)   # Truncate these values
max.support <- 20; set.seed(1)
pobs.a <- pstr.i <- 0.1
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, shape.p = logitlink(2+0.5*x2, inverse = TRUE))
gdata <- transform(gdata,
  y1 = rgaitdlog(nn, shape.p, a.mix = avec, pobs.mix = pobs.a,
                 i.mix = ivec, pstr.mix = pstr.i, truncate = tvec,
                 max.support = max.support))
gaitdlog(a.mix = avec, i.mix = ivec, max.support = max.support)
with(gdata, table(y1))
spikeplot(with(gdata, y1), las = 1)
fit7 <- vglm(y1 ~ x2, trace = TRUE, data = gdata,
             gaitdlog(i.mix = ivec, truncate = tvec,
                      max.support = max.support, a.mix = avec,
                      eq.ap = TRUE, eq.ip = TRUE))
head(fitted(fit7, type.fitted = "Pstr.mix"))
head(predict(fit7))
t(coef(fit7, matrix = TRUE))  # Easier to see with t()
summary(fit7)
spikeplot(with(gdata, y1), lwd = 2, ylim = c(0, 0.4))
plotdgaitd(fit7, new.plot = FALSE, offset.x = 0.2, all.lwd = 2)
## End(Not run)
Density, distribution function, quantile function and random generation for the generally altered, inflated, truncated and deflated logarithmic distribution. Both parametric and nonparametric variants are supported; these are based on finite mixtures of the parent with itself and the multinomial logit model (MLM) respectively.
dgaitdlog(x, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p,
    log = FALSE)
pgaitdlog(q, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p,
    lower.tail = TRUE)
qgaitdlog(p, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p)
rgaitdlog(n, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p)
x, q, p, n, log, lower.tail:
Same meaning as in Gaitdpois.

shape.p, shape.a, shape.i, shape.d:
Same meaning as shape for dlog; the suffixes .p, .a, .i and .d refer
to the parent, altered, inflated and deflated distributions
respectively.

truncate, max.support:
See Gaitdpois.

a.mix, i.mix, d.mix:
See Gaitdpois.

a.mlm, i.mlm, d.mlm:
See Gaitdpois.

pobs.mlm, pstr.mlm, pdip.mlm, byrow.aid:
See Gaitdpois.

pobs.mix, pstr.mix, pdip.mix:
See Gaitdpois.
These functions for the logarithmic distribution are analogous to
the Poisson, hence most details have been put in Gaitdpois.
These functions do what Oalog, Oilog and Otlog collectively did
plus much more.
dgaitdlog gives the density,
pgaitdlog gives the distribution function,
qgaitdlog gives the quantile function, and
rgaitdlog generates random deviates.
The default values of the arguments correspond to ordinary
dlog, plog, qlog and rlog respectively.
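A quick sanity check of those defaults (a minimal sketch, assuming
VGAM is attached):

# No special values: dgaitdlog() collapses to the parent dlog().
library(VGAM)
all.equal(dgaitdlog(1:8, shape.p = 0.5), dlog(1:8, shape = 0.5))  # TRUE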
See Gaitdpois
about the dangers
of too much inflation and/or deflation on
GAITD PMFs, and the difficulties detecting such.
See Gaitdpois
for general information also relevant
to this parent distribution.
T. W. Yee.
gaitdlog, Gaitdpois, dgaitdplot, Gaitdzeta, multinomial,
Oalog, Oilog, Otlog.
ivec <- c(2, 10); avec <- ivec + 1; shape <- 0.995; xgrid <- 0:15
max.support <- 15; pobs.a <- 0.10; pstr.i <- 0.15
dvec <- 1; pdip.mlm <- 0.05
(ddd <- dgaitdlog(xgrid, shape, max.support = max.support,
   pobs.mix = pobs.a, pdip.mlm = pdip.mlm, d.mlm = dvec,
   a.mix = avec, pstr.mix = pstr.i, i.mix = ivec))
## Not run: 
dgaitdplot(shape, ylab = "Probability", xlab = "x",
   max.support = max.support, pobs.mix = 0, pobs.mlm = 0,
   a.mlm = avec, all.lwd = 3, pdip.mlm = pdip.mlm, d.mlm = dvec,
   fam = "log", pstr.mix = pstr.i, i.mix = ivec, deflation = TRUE,
   main = "GAITD Combo PMF---Logarithmic Parent")
## End(Not run)
Density, distribution function, quantile function and random generation for the generally altered, inflated, truncated and deflated negative binomial (GAITD-NB) distribution. Both parametric and nonparametric variants are supported; these are based on finite mixtures of the parent with itself and the multinomial logit model (MLM) respectively.
dgaitdnbinom(x, size.p, munb.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, max.support = Inf, pobs.mix = 0, pobs.mlm = 0,
    pstr.mix = 0, pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0,
    byrow.aid = FALSE, size.a = size.p, size.i = size.p,
    size.d = size.p, munb.a = munb.p, munb.i = munb.p,
    munb.d = munb.p, log = FALSE)
pgaitdnbinom(q, size.p, munb.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, max.support = Inf, pobs.mix = 0, pobs.mlm = 0,
    pstr.mix = 0, pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0,
    byrow.aid = FALSE, size.a = size.p, size.i = size.p,
    size.d = size.p, munb.a = munb.p, munb.i = munb.p,
    munb.d = munb.p, lower.tail = TRUE)
qgaitdnbinom(p, size.p, munb.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, max.support = Inf, pobs.mix = 0, pobs.mlm = 0,
    pstr.mix = 0, pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0,
    byrow.aid = FALSE, size.a = size.p, size.i = size.p,
    size.d = size.p, munb.a = munb.p, munb.i = munb.p,
    munb.d = munb.p)
rgaitdnbinom(n, size.p, munb.p, a.mix = NULL, a.mlm = NULL,
    i.mix = NULL, i.mlm = NULL, d.mix = NULL, d.mlm = NULL,
    truncate = NULL, max.support = Inf, pobs.mix = 0, pobs.mlm = 0,
    pstr.mix = 0, pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0,
    byrow.aid = FALSE, size.a = size.p, size.i = size.p,
    size.d = size.p, munb.a = munb.p, munb.i = munb.p,
    munb.d = munb.p)
x, q, p, n, log, lower.tail:
Same meaning as in Gaitdpois.

size.p, munb.p:
Same meaning as in rnbinom; only the mu parameterization is
supported here (see below).

size.a, munb.a:
See Gaitdpois.

size.i, munb.i:
See Gaitdpois.

size.d, munb.d:
See Gaitdpois.

truncate, max.support:
See Gaitdpois.

a.mix, i.mix, d.mix:
See Gaitdpois.

a.mlm, i.mlm, d.mlm:
See Gaitdpois.

pobs.mlm, pstr.mlm, byrow.aid:
See Gaitdpois.

pobs.mix, pstr.mix:
See Gaitdpois.

pdip.mix, pdip.mlm:
See Gaitdpois.
These functions for the NBD are analogous to the Poisson,
hence most details have been put in Gaitdpois.
The NBD has two possible parameterizations: one involving a
probability (argument begins with prob) and the other the mean
(beginning with mu). Only the latter is supported here;
arguments such as prob.p and prob.a are not supported.
That is because mu is more likely to be used by most statisticians
than prob; see dnbinom.
dgaitdnbinom gives the density,
pgaitdnbinom gives the distribution function,
qgaitdnbinom gives the quantile function, and
rgaitdnbinom generates random deviates.
The default values of the arguments correspond to ordinary
dnbinom, pnbinom, qnbinom and rnbinom respectively.
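A minimal sketch of this correspondence, assuming VGAM is attached:

# No special values: dgaitdnbinom() collapses to dnbinom() with the
# mu parameterization.
library(VGAM)
all.equal(dgaitdnbinom(0:10, size.p = 2, munb.p = 5),
          dnbinom(0:10, size = 2, mu = 5))  # TRUE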
See Gaitdpois
about the dangers
of too much inflation and/or deflation on
GAITD PMFs, and the difficulties detecting such.
Four functions were moved from VGAM to VGAMdata;
they can be seen at Posnegbin.
It is preferable to use
dgaitdnbinom(x, size, munb.p = munb, truncate = 0)
instead of dposnbinom(x, size, munb = munb), etc.
T. W. Yee.
gaitdnbinomial, Gaitdpois, multinomial, Gaitdbinom, Gaitdlog,
Gaitdzeta.
size <- 10; xgrid <- 0:25
ivec <- c(5, 6, 10, 14); avec <- c(8, 11); munb <- 10
tvec <- 15; pobs.a <- 0.05; pstr.i <- 0.25
dvec <- 13; pdip.mlm <- 0.03; pobs.mlm <- 0.05
(ddd <- dgaitdnbinom(xgrid, size, munb.p = munb, munb.a = munb + 5,
   truncate = tvec, pobs.mix = pobs.a, pdip.mlm = pdip.mlm,
   d.mlm = dvec, pobs.mlm = pobs.a, a.mlm = avec,
   pstr.mix = pstr.i, i.mix = ivec))
## Not run: 
dgaitdplot(c(size, munb), fam = "nbinom", ylab = "Probability",
   xlab = "x", xlim = c(0, 25), truncate = tvec,
   pobs.mix = pobs.a, pobs.mlm = pobs.mlm, a.mlm = avec,
   all.lwd = 3, pdip.mlm = pdip.mlm, d.mlm = dvec,
   pstr.mix = pstr.i, i.mix = ivec, deflation = TRUE,
   main = "GAITD Combo PMF---NB Parent")
## End(Not run)
Fits a generally altered, inflated, truncated and deflated negative binomial regression by MLE. The GAITD combo model having 7 types of special values is implemented. This allows mixtures of negative binomial distributions on nested and/or partitioned support, as well as a multinomial logit model for (nonparametric) altered, inflated and deflated values.
gaitdnbinomial(a.mix = NULL, i.mix = NULL, d.mix = NULL,
    a.mlm = NULL, i.mlm = NULL, d.mlm = NULL, truncate = NULL,
    zero = c("size", "pobs", "pstr", "pdip"),
    eq.ap = TRUE, eq.ip = TRUE, eq.dp = TRUE,
    parallel.a = FALSE, parallel.i = FALSE, parallel.d = FALSE,
    lmunb.p = "loglink", lmunb.a = lmunb.p, lmunb.i = lmunb.p,
    lmunb.d = lmunb.p, lsize.p = "loglink", lsize.a = lsize.p,
    lsize.i = lsize.p, lsize.d = lsize.p,
    type.fitted = c("mean", "munbs", "sizes", "pobs.mlm",
      "pstr.mlm", "pdip.mlm", "pobs.mix", "pstr.mix", "pdip.mix",
      "Pobs.mix", "Pstr.mix", "Pdip.mix", "nonspecial", "Numer",
      "Denom.p", "sum.mlm.i", "sum.mix.i", "sum.mlm.d", "sum.mix.d",
      "ptrunc.p", "cdf.max.s"),
    gpstr.mix = ppoints(7) / 3,
    gpstr.mlm = ppoints(7) / (3 + length(i.mlm)),
    imethod = 1, mux.init = c(0.75, 0.5, 0.75, 0.5),
    imunb.p = NULL, imunb.a = imunb.p, imunb.i = imunb.p,
    imunb.d = imunb.p, isize.p = NULL, isize.a = isize.p,
    isize.i = isize.p, isize.d = isize.p,
    ipobs.mix = NULL, ipstr.mix = NULL, ipdip.mix = NULL,
    ipobs.mlm = NULL, ipstr.mlm = NULL, ipdip.mlm = NULL,
    byrow.aid = FALSE, ishrinkage = 0.95, probs.y = 0.35,
    nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7,
    nbd.max.support = 4000, max.chunk.MB = 30)
truncate:
See gaitdpoisson.

a.mix, i.mix, d.mix:
See gaitdpoisson.

a.mlm, i.mlm, d.mlm:
See gaitdpoisson.

lmunb.p, lmunb.a, lmunb.i, lmunb.d:
Link functions pertaining to the mean parameters. See gaitdpoisson.

lsize.p, lsize.a, lsize.i, lsize.d:
Link functions pertaining to the size parameters of the negative
binomial distribution.

eq.ap, eq.ip, eq.dp:
See gaitdpoisson.

parallel.a, parallel.i, parallel.d:
See gaitdpoisson.

type.fitted:
See gaitdpoisson.

gpstr.mix, gpstr.mlm:
See gaitdpoisson.

imethod, ipobs.mix, ipstr.mix, ipdip.mix:
See gaitdpoisson.

ipobs.mlm, ipstr.mlm, ipdip.mlm:
See gaitdpoisson.

mux.init:
Numeric, of length 4.
General downward multiplier for initial values for
the sample proportions (MLEs actually).
See gaitdpoisson.

imunb.p, imunb.a, imunb.i, imunb.d:
See gaitdpoisson.

isize.p, isize.a, isize.i, isize.d:
See gaitdpoisson.

probs.y, ishrinkage:
See CommonVGAMffArguments.

byrow.aid:
Details are at Gaitdpois.

zero:
See gaitdpoisson.

nsimEIM, cutoff.prob, eps.trig:
See negbinomial.

nbd.max.support, max.chunk.MB:
See negbinomial.
The GAITD–NB combo model is the pinnacle of GAITD regression
for counts because it potentially handles
underdispersion,
equidispersion and
overdispersion relative to the Poisson,
as well as
alteration,
inflation,
deflation and
truncation at arbitrary support points.
In contrast, gaitdpoisson
cannot handle
overdispersion so well.
The GAITD–NB is so flexible that it can accommodate up to
seven modes.
The full GAITD–NB–NB–MLM–NB–MLM–NB–MLM combo model
may be fitted with this family function.
There are seven types of special values and all
arguments for these may be used in a single model.
Here, the MLM represents the nonparametric part while the NB
refers to the negative binomial mixtures.
The defaults for this function correspond to an
ordinary negative binomial
regression so that negbinomial
is called instead.
While much of the documentation here draws upon
gaitdpoisson, there are additional
details here because the NBD is a two-parameter
distribution that handles overdispersion relative
to the Poisson.
Consequently, this family function is exceedingly flexible
and there are many more pitfalls to avoid.
The order of the linear/additive predictors is
best explained by an example.
Suppose a combo model has
length(a.mix) > 3
and
length(i.mix) > 3
,
length(d.mix) > 3
,
a.mlm = 3:5
,
i.mlm = 6:9
and
d.mlm = 10:12
, say.
Then loglink(munb.p)
and loglink(size.p)
are the first two.
The third is multilogitlink(pobs.mix)
followed
by loglink(munb.a)
and loglink(size.a)
because a.mix
is long enough.
The sixth is multilogitlink(pstr.mix)
followed
by loglink(munb.i)
and loglink(size.i)
because i.mix
is long enough.
The ninth is multilogitlink(pdip.mix)
followed
by loglink(munb.d)
and loglink(size.d)
because d.mix
is long enough.
Next are the probabilities for the a.mlm
values.
Then are the probabilities for the i.mlm
values.
Lastly are the probabilities for the d.mlm
values.
All the probabilities are estimated by one big MLM
and effectively
the "(Others)"
column of left over probabilities is
associated with the nonspecial values.
These might be called the
nonspecial baseline probabilities (NBP)
or reserve probabilities.
The dimension of the vector of linear/additive predictors here
is 11 plus one per a.mlm, i.mlm and d.mlm value,
i.e., 11 + 3 + 4 + 3 = 21 for this example.
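The count is simple arithmetic; for the hypothetical combo
model above:

# 11 NB-related predictors (2 for the parent, plus 3 for each of the
# three mix components) plus one MLM probability per a.mlm, i.mlm
# and d.mlm value:
2 + 3 + 3 + 3 + length(3:5) + length(6:9) + length(10:12)  # = 21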
Apart from the order of the linear/additive predictors,
the following are (or should be) equivalent:
gaitdnbinomial() and negbinomial(),
gaitdnbinomial(a.mix = 0) and zanegbinomial(zero = "pobs0"),
gaitdnbinomial(i.mix = 0) and zinegbinomial(zero = "pstr0"),
gaitdnbinomial(truncate = 0) and posnegbinomial().
Likewise, if a.mix and i.mix are assigned a scalar then
that scalar is effectively moved to a.mlm and i.mlm,
because no parameters such as munb.a and munb.i are then estimated.
Thus gaitdnbinomial(a.mix = 0) and gaitdnbinomial(a.mlm = 0)
are effectively the same, and ditto for
gaitdnbinomial(i.mix = 0) and gaitdnbinomial(i.mlm = 0).
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions
such as vglm, rrvglm and vgam.
The fitted.values slot of the fitted object,
which should be extracted by the generic function fitted,
returns the mean by default.
See the information above on type.fitted.
See gaitdpoisson
.
Also, having eq.ap = TRUE
, eq.ip = TRUE
and eq.dp = TRUE
is often needed to obtain
initial values that are good enough because they borrow
strength across the different operators.
It is usually easy to relax these assumptions later.
This family function is under constant development and future changes will occur.
If length(a.mix)
is 1 then effectively this becomes a
value of a.mlm
.
If length(a.mix)
is 2 then an error message
will be issued (overfitting really).
If length(a.mix)
is 3 then this is almost
overfitting too.
Hence length(a.mix)
should be 4 or more.
Ditto for length(i.mix)
and length(d.mix)
.
See gaitdpoisson
for notes about numerical
problems that can easily arise. With the NBD there is
even more potential trouble that can occur.
In particular, good initial values are more necessary so
it pays to experiment with arguments such as
imunb.p
and isize.p
, as well as
fitting an intercept-only model first before adding
covariates and using etastart
.
Currently max.support
is missing because only
Inf
is handled. This might change later.
T. W. Yee
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
Gaitdnbinom, dgaitdplot, multinomial, rootogram4, specials,
plotdgaitd, spikeplot, meangaitd, KLD, gaitdpoisson, gaitdlog,
gaitdzeta, multilogitlink, multinomial, goffset, Trunc,
negbinomial, CommonVGAMffArguments, simulate.vlm.
## Not run: 
i.mix <- c(5, 10, 12, 16)  # Inflate these values parametrically
i.mlm <- c(14, 15)         # Inflate these values
a.mix <- c(1, 6, 13, 20)   # Alter these values
tvec <- c(3, 11)           # Truncate these values
pstr.mlm <- 0.1            # So parallel.i = TRUE
pobs.mix <- pstr.mix <- 0.1; set.seed(1)
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, munb.p = exp(2 + 0.0 * x2), size.p = exp(1))
gdata <- transform(gdata, y1 = rgaitdnbinom(nn, size.p, munb.p,
   a.mix = a.mix, i.mix = i.mix, pobs.mix = pobs.mix,
   pstr.mix = pstr.mix, i.mlm = i.mlm, pstr.mlm = pstr.mlm,
   truncate = tvec))
gaitdnbinomial(a.mix = a.mix, i.mix = i.mix, i.mlm = i.mlm)
with(gdata, table(y1))
fit1 <- vglm(y1 ~ 1, crit = "coef", trace = TRUE, data = gdata,
             gaitdnbinomial(a.mix = a.mix, i.mix = i.mix,
                            i.mlm = i.mlm, parallel.i = TRUE,
                            eq.ap = TRUE, eq.ip = TRUE,
                            truncate = tvec))
head(fitted(fit1, type.fitted = "Pstr.mix"))
head(predict(fit1))
t(coef(fit1, matrix = TRUE))  # Easier to see with t()
summary(fit1)
spikeplot(with(gdata, y1), lwd = 2)
plotdgaitd(fit1, new.plot = FALSE, offset.x = 0.2, all.lwd = 2)
## End(Not run)
Density, distribution function, quantile function and random generation for the generally altered, inflated, truncated and deflated Poisson distribution. Both parametric and nonparametric variants are supported; these are based on finite mixtures of the parent with itself and the multinomial logit model (MLM) respectively.
dgaitdpois(x, lambda.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    lambda.a = lambda.p, lambda.i = lambda.p, lambda.d = lambda.p,
    log = FALSE)
pgaitdpois(q, lambda.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    lambda.a = lambda.p, lambda.i = lambda.p, lambda.d = lambda.p,
    lower.tail = TRUE, checkd = FALSE)
qgaitdpois(p, lambda.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    lambda.a = lambda.p, lambda.i = lambda.p, lambda.d = lambda.p)
rgaitdpois(n, lambda.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    lambda.a = lambda.p, lambda.i = lambda.p, lambda.d = lambda.p)
x, q, p, n:
Same meaning as in dpois.

log, lower.tail:
Same meaning as in dpois and ppois.

lambda.p, lambda.a, lambda.i, lambda.d:
Same meaning as in dpois; lambda.p is the rate of the parent
distribution, while the others are the rates of the altered,
inflated and deflated mixture components.

truncate, max.support:
numeric; these specify the set of truncated values.
The default value of max.support means no upper-tail truncation.

a.mix, i.mix, d.mix:
Vectors of nonnegative integers;
the altered, inflated and deflated values for the
parametric variant.
Each argument must have unique values only.
Assigning a.mix, say, means that pobs.mix and lambda.a are used.

a.mlm, i.mlm, d.mlm:
Similar to the above, but for the nonparametric (MLM) variant.
For example, assigning a.mlm means that one probability in
pobs.mlm is used for each of its values.

pobs.mlm, pstr.mlm, pdip.mlm, byrow.aid:
The first three arguments are coerced into a matrix of
probabilities using byrow.aid to determine the order of the
elements.

pobs.mix, pstr.mix, pdip.mix:
Vectors of probabilities that are recycled if necessary to
the required length.

checkd:
Logical.
If TRUE then dgaitdpois(floor(q), ...) is also evaluated to help
detect whether the PMF is invalid.
These functions allow any combination of 4 operator types:
truncation, alteration, inflation and deflation.
The precedence is
truncation, then alteration and lastly inflation and deflation.
Informally, deflation can be thought of as the
opposite of inflation.
This order minimizes the potential interference among the operators.
Loosely, a set of probabilities is set to 0 by truncation
and the remaining probabilities are scaled up.
Then a different set of probabilities is set to the values
pobs.mix and/or pobs.mlm,
and the remaining probabilities are rescaled up.
Then another different set of probabilities is inflated by
an amount pstr.mlm and/or proportional to pstr.mix,
so that individual elements in this set have two sources.
Then another different set of probabilities is deflated by
an amount pdip.mlm and/or proportional to pdip.mix.
Then all the probabilities are
rescaled so that they sum to unity.
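A small numeric sketch of this renormalization, assuming VGAM is
attached (the particular values are illustrative only):

# Alter 3, inflate 5 and truncate 7; the resulting PMF still sums to 1.
library(VGAM)
pmf <- dgaitdpois(0:100, lambda.p = 4, a.mlm = 3, pobs.mlm = 0.10,
                  i.mlm = 5, pstr.mlm = 0.15, truncate = 7)
sum(pmf)    # 1 (up to rounding), since the PMF is renormalized
pmf[7 + 1]  # P(Y = 7) is exactly 0 after truncation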
Both parametric and nonparametric variants are implemented.
They usually have arguments with suffix
.mix
and .mlm
respectively.
The MLM is a loose coupling that effectively separates
the parent (or base) distribution from
the altered values.
Values inflated nonparametrically effectively have
their spikes shaved off.
The .mix
variant has associated with it
lambda.a
and lambda.i
and lambda.d
because it is mixture of 4 Poisson distributions with
partitioned or nested support.
Any value of the support of the distribution that is
altered, inflated, truncated or deflated
is called a special value.
A special value that is altered may mean that its probability
increases or decreases relative to the parent distribution.
An inflated special value means that its probability has
increased, provided alteration elsewhere has not already made it
decrease.
There are seven types of special values and they are
represented by
a.mix, a.mlm, i.mix, i.mlm, d.mix, d.mlm and truncate.
Terminology-wise, special values
are altered or inflated or truncated or deflated, and
the remaining support points that correspond directly to
the parent distribution are nonspecial or ordinary.
These functions do what Zapois, Zipois and Pospois
collectively did plus much more.
In the notation of Yee and Ma (2024)
these functions allow for the special cases:
(i) GAIT–Pois(lambda.p)–Pois(lambda.a, a.mix, pobs.mix)–Pois(lambda.i, i.mix, pstr.mix);
(ii) GAIT–Pois(lambda.p)–MLM(a.mlm, pobs.mlm)–MLM(i.mlm, pstr.mlm).
Model (i) is totally parametric while model (ii) is the most
nonparametric possible.
dgaitdpois gives the density,
pgaitdpois gives the distribution function,
qgaitdpois gives the quantile function, and
rgaitdpois generates random deviates.
The default values of the arguments correspond to ordinary
dpois, ppois, qpois and rpois respectively.
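For instance (a minimal check, assuming VGAM is attached):

# Defaults collapse to the ordinary Poisson functions.
library(VGAM)
all.equal(dgaitdpois(0:10, lambda.p = 3), dpois(0:10, 3))  # TRUE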
It is possible that the GAITD PMF is invalid because
of too much inflation and/or deflation.
This would result in some probabilities exceeding
unity or being negative.
Hence x
should ideally contain these types
of special values so that this can be detected.
If so then a NaN
is returned and
a warning is issued, e.g.,
same as dnorm(0, 0, sd = -1)
.
To help checking,
pgaitdpois(q, ...)
calls
dgaitdpois(floor(q), ...)
if checkd
is TRUE
.
That is, given the parameters, it is impractical to determine whether the PMF is valid. To do this, one would have to compute the PMF at all values of its support and check that they are nonnegative and sum to unity. Hence one must be careful to input values from the parameter space, especially for inflation and deflation. See Example 2 below.
Functions Pospois
and those similar
have been moved to VGAMdata.
It is better to use
dgaitdpois(x, lambda, truncate = 0)
instead of
dpospois(x, lambda), etc.
T. W. Yee.
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
gaitdpoisson, multinomial, specials, spikeplot, dgaitdplot,
Zapois, Zipois, Pospois, Poisson;
Gaitdbinom, Gaitdnbinom, Gaitdlog, Gaitdzeta.
# Example 1
ivec <- c(6, 14); avec <- c(8, 11); lambda <- 10; xgrid <- 0:25
tvec <- 15; max.support <- 20; pobs.mix <- 0.05; pstr.i <- 0.25
dvec <- 13; pdip.mlm <- 0.05; pobs.mlm <- 0.05
(ddd <- dgaitdpois(xgrid, lambda, lambda.a = lambda + 5,
   truncate = tvec, max.support = max.support, pobs.mix = pobs.mix,
   pobs.mlm = pobs.mlm, a.mlm = avec, pdip.mlm = pdip.mlm,
   d.mlm = dvec, pstr.mix = pstr.i, i.mix = ivec))
## Not run: 
dgaitdplot(lambda, ylab = "Probability", xlab = "x",
   truncate = tvec, max.support = max.support, pobs.mix = pobs.mix,
   pobs.mlm = pobs.mlm, a.mlm = avec, all.lwd = 3,
   pdip.mlm = pdip.mlm, d.mlm = dvec, pstr.mix = pstr.i,
   i.mix = ivec, deflation = TRUE,
   main = "GAITD Combo PMF---Poisson Parent")
## End(Not run)
# Example 2: detection of an invalid PMF
xgrid <- 1:3  # Does not cover the special values purposely
(ddd <- dgaitdpois(xgrid, 1, pdip.mlm = 0.1, d.mlm = 5,
                   pstr.mix = 0.95, i.mix = 0))  # Undetected
xgrid <- 0:13  # Wider range so this detects the problem
(ddd <- dgaitdpois(xgrid, 1, pdip.mlm = 0.1, d.mlm = 5,
                   pstr.mix = 0.95, i.mix = 0))  # Detected
sum(ddd, na.rm = TRUE)  # Something gone awry
Fits a generally altered, inflated, truncated and deflated Poisson regression by MLE. The GAITD combo model having 7 types of special values is implemented. This allows mixtures of Poissons on nested and/or partitioned support as well as a multinomial logit model for (nonparametric) altered, inflated and deflated values. Truncation may include the upper tail.
gaitdpoisson(a.mix = NULL, i.mix = NULL, d.mix = NULL,
    a.mlm = NULL, i.mlm = NULL, d.mlm = NULL,
    truncate = NULL, max.support = Inf,
    zero = c("pobs", "pstr", "pdip"),
    eq.ap = TRUE, eq.ip = TRUE, eq.dp = TRUE,
    parallel.a = FALSE, parallel.i = FALSE, parallel.d = FALSE,
    llambda.p = "loglink", llambda.a = llambda.p,
    llambda.i = llambda.p, llambda.d = llambda.p,
    type.fitted = c("mean", "lambdas", "pobs.mlm", "pstr.mlm",
      "pdip.mlm", "pobs.mix", "pstr.mix", "pdip.mix", "Pobs.mix",
      "Pstr.mix", "Pdip.mix", "nonspecial", "Numer", "Denom.p",
      "sum.mlm.i", "sum.mix.i", "sum.mlm.d", "sum.mix.d",
      "ptrunc.p", "cdf.max.s"),
    gpstr.mix = ppoints(7) / 3,
    gpstr.mlm = ppoints(7) / (3 + length(i.mlm)),
    imethod = 1, mux.init = c(0.75, 0.5, 0.75),
    ilambda.p = NULL, ilambda.a = ilambda.p, ilambda.i = ilambda.p,
    ilambda.d = ilambda.p, ipobs.mix = NULL, ipstr.mix = NULL,
    ipdip.mix = NULL, ipobs.mlm = NULL, ipstr.mlm = NULL,
    ipdip.mlm = NULL, byrow.aid = FALSE, ishrinkage = 0.95,
    probs.y = 0.35)
truncate, max.support:
Vector of truncated values, i.e., nonnegative integers.
For the first seven arguments (for the special values)
a NULL stands for an empty set.
The default value of max.support means no upper-tail truncation.

a.mix, i.mix, d.mix:
Vector of altered, inflated and deflated values corresponding to
finite mixture models.
These are described as parametric or structured.
Due to its great flexibility, it is easy to misuse this function
and ideally the values of these arguments should be well
justified by the application on hand.
Adding inappropriate or unnecessary values to these arguments
willy-nilly is a recipe for disaster.

a.mlm, i.mlm, d.mlm:
Vector of altered, inflated and deflated values corresponding
to the multinomial logit model (MLM) probabilities of
observing those values; see multinomial.

llambda.p, llambda.a, llambda.i, llambda.d:
Link functions for the parent,
altered, inflated and deflated distributions respectively.
See Links for more choices.

eq.ap, eq.ip, eq.dp:
Single logical each.
Constrain the rate parameters to be equal?
See CommonVGAMffArguments.
For the GIT–Pois submodel,
after plotting the responses,
if the distribution of the spikes
above the nominal probabilities
has roughly the same shape
as the ordinary values then setting
eq.ip = TRUE is probably a good idea.

parallel.a, parallel.i, parallel.d:
Single logical each.
Constrain the MLM probabilities to be equal?
If so then this applies to all the a.mlm, i.mlm or d.mlm
probabilities respectively.
See CommonVGAMffArguments.

type.fitted:
See CommonVGAMffArguments.
The default is the mean; the other choices extract the
quantities described in Details, e.g., the lambdas or the
mixture and MLM probabilities such as pobs.mix and pstr.mlm.

gpstr.mix, gpstr.mlm:
Grid-search values used to initialize pstr.mix and pstr.mlm;
see CommonVGAMffArguments.

imethod, ipobs.mix, ipstr.mix, ipdip.mix:
See CommonVGAMffArguments.

ipobs.mlm, ipstr.mlm, ipdip.mlm:
See CommonVGAMffArguments.

mux.init:
Numeric, of length 3.
General downward multiplier for initial values for the sample
proportions (MLEs actually).
This is under development and more details are forthcoming.
In general, 1 means unchanged and values should lie in (0, 1],
and values about 0.5 are recommended.
The elements apply in order to altered, inflated and deflated
(no distinction between mix and MLM).

ilambda.p, ilambda.a, ilambda.i, ilambda.d:
Initial values for the rate parameters;
see CommonVGAMffArguments.

probs.y, ishrinkage:
See CommonVGAMffArguments.

byrow.aid:
Details are at Gaitdpois.

zero:
See CommonVGAMffArguments.
By default the mixture and MLM probabilities are modelled as
intercept-only; remove the relevant names from zero to model
them against covariates.
The full
GAITD–Pois combo model
may be fitted with this family function.
There are seven types of special values and all arguments for these
may be used in a single model.
Here, the MLM represents the nonparametric part while the Pois
refers to the Poisson mixtures.
The defaults for this function correspond to an ordinary Poisson
regression so that poissonff
is called instead.
A MLM with only one probability to model is equivalent to
logistic regression
(binomialff
and logitlink
).
The order of the linear/additive predictors is best
explained by an example.
Suppose a combo model has
length(a.mix) > 2
and
length(i.mix) > 2
,
length(d.mix) > 2
,
a.mlm = 3:5
,
i.mlm = 6:9
and
d.mlm = 10:12
, say.
Then loglink(lambda.p)
is the first.
The second is multilogitlink(pobs.mix)
followed
by loglink(lambda.a)
because a.mix
is long enough.
The fourth is multilogitlink(pstr.mix)
followed
by loglink(lambda.i)
because i.mix
is long enough.
The sixth is multilogitlink(pdip.mix)
followed
by loglink(lambda.d)
because d.mix
is long enough.
Next are the probabilities for the a.mlm
values.
Then are the probabilities for the i.mlm
values.
Lastly are the probabilities for the d.mlm
values.
All the probabilities are estimated by one big MLM
and effectively
the "(Others)"
column of left over probabilities is
associated with the nonspecial values.
These might be called the
nonspecial baseline probabilities (NBP).
The dimension of the vector of linear/additive predictors here
is 7 plus one per a.mlm, i.mlm and d.mlm value,
i.e., 7 + 3 + 4 + 3 = 17 for this example.
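As simple arithmetic, for the hypothetical combo model above:

# lambda.p, then (pobs.mix, lambda.a), (pstr.mix, lambda.i),
# (pdip.mix, lambda.d), plus one MLM probability per a.mlm,
# i.mlm and d.mlm value:
1 + 2 + 2 + 2 + length(3:5) + length(6:9) + length(10:12)  # = 17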
Two mixture submodels that may be fitted can be abbreviated
GAT–Pois or
GIT–Pois.
For the GAT model
the distribution being fitted is a (spliced) mixture
of two Poissons with differing (partitioned) support.
Likewise, for the GIT model
the distribution being fitted is a mixture
of two Poissons with nested support.
The two rate parameters may be constrained to be equal using
eq.ap
and eq.ip
.
A good first step is to apply spikeplot
for selecting
candidate values for altering, inflating and deflating.
Deciding between the parametric and nonparametric variants can
also be determined from examining the spike plot.
Misspecified
a.mix/a.mlm/i.mix/i.mlm/d.mix/d.mlm
values will result in convergence problems
(setting trace = TRUE is a very good idea).
This function currently does not handle multiple responses.
Further details are at Gaitdpois
.
A well-conditioned data–model combination should pose no
difficulties for the automatic starting value selection
to succeed.
Failure to obtain initial values from this self-starting
family function indicates that the degree of inflation/deflation
may be marginal and/or the model is misspecified.
If this problem is worth surmounting
the arguments to focus on especially are
mux.init
,
gpstr.mix
, gpstr.mlm
,
ipdip.mix
and ipdip.mlm
.
See below for the stepping-stone trick.
Apart from the order of the linear/additive predictors,
the following are (or should be) equivalent:
gaitdpoisson() and poissonff(),
gaitdpoisson(a.mix = 0) and zapoisson(zero = "pobs0"),
gaitdpoisson(i.mix = 0) and zipoisson(zero = "pstr0"),
gaitdpoisson(truncate = 0) and pospoisson().
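A quick sketch of the first equivalence, assuming VGAM is attached
(pdata is a hypothetical data frame):

# With all defaults, gaitdpoisson() reduces to an ordinary Poisson
# regression (poissonff() is called instead), so these should agree.
library(VGAM)
set.seed(1); pdata <- data.frame(y = rpois(100, 4))
coef(vglm(y ~ 1, gaitdpoisson(), data = pdata))
coef(vglm(y ~ 1, poissonff, data = pdata))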
Likewise, if a.mix and i.mix are assigned a scalar then
that scalar is effectively moved to a.mlm and i.mlm,
because no lambda.a or lambda.i is then estimated.
Thus gaitdpoisson(a.mix = 0) and gaitdpoisson(a.mlm = 0)
are effectively the same, and ditto for
gaitdpoisson(i.mix = 0) and gaitdpoisson(i.mlm = 0).
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions
such as vglm, rrvglm and vgam.
The fitted.values slot of the fitted object,
which should be extracted by the generic function fitted,
returns the mean by default.
See the information above on type.fitted.
Amateurs tend to be overzealous fitting
zero-inflated models when the
fitted mean is low—the warning of
ziP
should be heeded.
For GAITD regression the warning applies more
strongly and generally; here to all
i.mix
, i.mlm
, d.mix
and
d.mlm
values, not just 0. Even one
misspecified special value usually will cause
convergence problems.
Default values for this and similar family
functions may change in the future, e.g.,
eq.ap
and eq.ip
. Important
internal changes might occur too, such as the
ordering of the linear/additive predictors and
the quantities returned as the fitted values.
Using i.mlm
requires more caution
than a.mlm
because gross inflation
is ideally needed for it to work safely.
Ditto for i.mix
versus a.mix
.
Data exhibiting deflation or little to no
inflation will produce numerical problems,
hence set trace = TRUE
to monitor
convergence. More than about 10 IRLS iterations
should raise suspicion.
Ranking the four operators by difficulty, the easiest is truncation followed by alteration, then inflation and the most difficult is deflation. The latter needs good initial values and the current default will probably not work on some data sets. Studying the spikeplot is time very well spent. In general it is very easy to specify an overfitting model so it is a good idea to split the data into training and test sets.
This function is quite memory-hungry with
respect to length(c(a.mix, i.mix, d.mix,
a.mlm, i.mlm, d.mlm)).
Separately, because all values of the NBP vector
need to be positive, it pays to be economical
with respect to d.mlm especially, so
that probabilities are not consumed
unnecessarily, so to speak.
It is often a good idea to set eq.ip =
TRUE
, especially when length(i.mix)
is not much more than 2 or the values of
i.mix
are not spread over the range
of the response. This way the estimation
can borrow strength from both the inflated
and non-inflated values. If the i.mix
values form a single small cluster then this
can easily create estimation difficulties—the
idea is somewhat similar to multicollinearity.
The same holds for d.mix
.
Numerical problems can easily arise because
of the extreme flexibility of this
distribution and/or the lack of sizeable
inflation/deflation; it is a good idea to gain
experience with simulated data first before
applying it to real data.
Numerical problems may arise if any of the
special values are in remote places of the
support, e.g., a value y
such that
dpois(y, lambda.p)
is very close to
0. This is because the ratio of two tiny values
can be unstable.
Good initial values may be difficult to obtain
using self-starting procedures, especially
when there are covariates. If so, then it is
advisable to use a trick: fit an intercept-only
model first and then use etastart =
predict(int.only.model)
to fit the model
with covariates. This uses the simpler model
as a stepping-stone.
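A sketch of this stepping-stone trick (hypothetical names; gdata,
y1 and x2 are as in the Examples below):

fam <- gaitdpoisson(a.mix = c(1, 13), i.mix = c(5, 10))
fit0 <- vglm(y1 ~ 1, fam, data = gdata)    # intercept-only first
fit1 <- vglm(y1 ~ x2, fam, data = gdata,   # then warm-start with its etas
             etastart = predict(fit0))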
The labelling of the linear/additive predictors
has been abbreviated to reduce space. For
example, multilogitlink(pobs.mix)
and
multilogitlink(pstr.mix)
would be more
accurately multilogitlink(cbind(pobs.mix,
pstr.mix))
because one grand MLM is fitted.
This shortening may result in modifications
needed in other parts of VGAM to
compensate.
Because estimation involves an MLM, the restricted
parameter space means that if the dip probabilities
are large then the NBP may become too close to 0.
If this is so then there are tricks to avoid a
negative NBP.
One of them is to model as many values of d.mlm
as d.mix, hence the dip probabilities become
modelled via the deflation distribution instead.
Another trick is to alter those special values rather than
deflate them if the dip probabilities are large.
Due to its complexity,
the HDE test hdeff
is currently
unavailable for GAITD regressions.
Randomized quantile residuals (RQRs) are
available; see residualsvglm
.
T. W. Yee
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
Gaitdpois, multinomial, rootogram4, specials, plotdgaitd,
spikeplot, meangaitd, KLD, goffset, Trunc, gaitdnbinomial,
gaitdlog, gaitdzeta, multilogitlink, multinomial, residualsvglm,
poissonff, zapoisson, zipoisson, pospoisson,
CommonVGAMffArguments, simulate.vlm.
## Not run: 
i.mix <- c(5, 10)   # Inflate these values parametrically
i.mlm <- c(14, 15)  # Inflate these values
a.mix <- c(1, 13)   # Alter these values
tvec <- c(3, 11)    # Truncate these values
pstr.mlm <- 0.1     # So parallel.i = TRUE
pobs.mix <- pstr.mix <- 0.1
max.support <- 20; set.seed(1)
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, lambda.p = exp(2 + 0.0 * x2))
gdata <- transform(gdata, y1 = rgaitdpois(nn, lambda.p,
   a.mix = a.mix, i.mix = i.mix, pobs.mix = pobs.mix,
   pstr.mix = pstr.mix, i.mlm = i.mlm, pstr.mlm = pstr.mlm,
   truncate = tvec, max.support = max.support))
gaitdpoisson(a.mix = a.mix, i.mix = i.mix, i.mlm = i.mlm)
with(gdata, table(y1))
fit1 <- vglm(y1 ~ 1, crit = "coef", trace = TRUE, data = gdata,
             gaitdpoisson(a.mix = a.mix, i.mix = i.mix,
                          i.mlm = i.mlm, parallel.i = TRUE,
                          eq.ap = TRUE, eq.ip = TRUE,
                          truncate = tvec, max.support = max.support))
head(fitted(fit1, type.fitted = "Pstr.mix"))
head(predict(fit1))
t(coef(fit1, matrix = TRUE))  # Easier to see with t()
summary(fit1)  # No HDE test by default but HDEtest = TRUE is ideal
spikeplot(with(gdata, y1), lwd = 2)
plotdgaitd(fit1, new.plot = FALSE, offset.x = 0.2, all.lwd = 2)
## End(Not run)
Fits a generally altered, inflated, truncated and deflated zeta regression by MLE. The GAITD combo model having 7 types of special values is implemented. This allows mixtures of zetas on nested and/or partitioned support as well as a multinomial logit model for altered, inflated and deflated values.
gaitdzeta(a.mix = NULL, i.mix = NULL, d.mix = NULL, a.mlm = NULL,
    i.mlm = NULL, d.mlm = NULL, truncate = NULL, max.support = Inf,
    zero = c("pobs", "pstr", "pdip"),
    eq.ap = TRUE, eq.ip = TRUE, eq.dp = TRUE,
    parallel.a = FALSE, parallel.i = FALSE, parallel.d = FALSE,
    lshape.p = "loglink", lshape.a = lshape.p, lshape.i = lshape.p,
    lshape.d = lshape.p,
    type.fitted = c("mean", "shapes", "pobs.mlm", "pstr.mlm",
      "pdip.mlm", "pobs.mix", "pstr.mix", "pdip.mix", "Pobs.mix",
      "Pstr.mix", "Pdip.mix", "nonspecial", "Numer", "Denom.p",
      "sum.mlm.i", "sum.mix.i", "sum.mlm.d", "sum.mix.d",
      "ptrunc.p", "cdf.max.s"),
    gshape.p = -expm1(-ppoints(7)), gpstr.mix = ppoints(7) / 3,
    gpstr.mlm = ppoints(7) / (3 + length(i.mlm)),
    imethod = 1, mux.init = c(0.75, 0.5, 0.75),
    ishape.p = NULL, ishape.a = ishape.p, ishape.i = ishape.p,
    ishape.d = ishape.p, ipobs.mix = NULL, ipstr.mix = NULL,
    ipdip.mix = NULL, ipobs.mlm = NULL, ipstr.mlm = NULL,
    ipdip.mlm = NULL, byrow.aid = FALSE, ishrinkage = 0.95,
    probs.y = 0.35)
truncate, max.support:
See gaitdpoisson.

a.mix, i.mix, d.mix:
See gaitdpoisson.

a.mlm, i.mlm, d.mlm:
See gaitdpoisson.

lshape.p, lshape.a, lshape.i, lshape.d:
Link functions. See gaitdpoisson and Links.

eq.ap, eq.ip, eq.dp:
Single logical each. See gaitdpoisson.

parallel.a, parallel.i, parallel.d:
Single logical each. See gaitdpoisson.

type.fitted, mux.init:
See gaitdpoisson.

imethod, ipobs.mix, ipstr.mix, ipdip.mix:
See gaitdpoisson.

ipobs.mlm, ipstr.mlm, ipdip.mlm, byrow.aid:
See gaitdpoisson.

gpstr.mix, gpstr.mlm:
See gaitdpoisson.

gshape.p, ishape.p:
See gaitdpoisson.

ishape.a, ishape.i, ishape.d:
See gaitdpoisson.

probs.y, ishrinkage:
See CommonVGAMffArguments.

zero:
See gaitdpoisson.
Many details of this family function can be found in gaitdpoisson
because the zeta is also a 1-parameter discrete distribution.
This function currently does not handle
multiple responses. Further details are at Gaitdzeta.
As alluded to above, when there are covariates
it is much more interpretable to model
the mean rather than the shape parameter.
Hence zetaffMlink is recommended. (This might become the default
in the future.) So installing VGAMextra is a good idea.
Apart from the order of the linear/additive predictors,
the following are (or should be) equivalent:
gaitdzeta() and zetaff(),
gaitdzeta(a.mix = 1) and oazeta(zero = "pobs1"),
gaitdzeta(i.mix = 1) and oizeta(zero = "pstr1"),
gaitdzeta(truncate = 1) and otzeta().
The functions oazeta, oizeta and otzeta have been placed in VGAMdata.
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions such as vglm, rrvglm
and vgam.
See gaitdpoisson
.
See gaitdpoisson
.
T. W. Yee
Gaitdzeta, zetaff, zetaffMlink, Gaitdpois, gaitdpoisson, gaitdlog,
spikeplot, goffset, Trunc, oazeta, oizeta, otzeta,
CommonVGAMffArguments, rootogram4, simulate.vlm.
## Not run: 
avec <- c(5, 10)  # Alter these values parametrically
ivec <- c(3, 15)  # Inflate these values
tvec <- c(6, 7)   # Truncate these values
set.seed(1); pobs.a <- pstr.i <- 0.1
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, shape.p = logitlink(2, inverse = TRUE))
gdata <- transform(gdata, y1 = rgaitdzeta(nn, shape.p, a.mix = avec,
   pobs.mix = pobs.a, i.mix = ivec, pstr.mix = pstr.i,
   truncate = tvec))
gaitdzeta(a.mix = avec, i.mix = ivec)
with(gdata, table(y1))
spikeplot(with(gdata, y1), las = 1)
fit7 <- vglm(y1 ~ 1, trace = TRUE, data = gdata, crit = "coef",
             gaitdzeta(i.mix = ivec, truncate = tvec, a.mix = avec,
                       eq.ap = TRUE, eq.ip = TRUE))
head(fitted(fit7, type.fitted = "Pstr.mix"))
head(predict(fit7))
t(coef(fit7, matrix = TRUE))  # Easier to see with t()
summary(fit7)
spikeplot(with(gdata, y1), lwd = 2, ylim = c(0, 0.6), xlim = c(0, 20))
plotdgaitd(fit7, new.plot = FALSE, offset.x = 0.2, all.lwd = 2)
## End(Not run)
Density, distribution function, quantile function and random generation for the generally altered, inflated, truncated and deflated zeta distribution. Both parametric and nonparametric variants are supported; these are based on finite mixtures of the parent with itself and the multinomial logit model (MLM) respectively.
dgaitdzeta(x, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p,
    log = FALSE)
pgaitdzeta(q, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p,
    lower.tail = TRUE)
qgaitdzeta(p, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p)
rgaitdzeta(n, shape.p, a.mix = NULL, a.mlm = NULL, i.mix = NULL,
    i.mlm = NULL, d.mix = NULL, d.mlm = NULL, truncate = NULL,
    max.support = Inf, pobs.mix = 0, pobs.mlm = 0, pstr.mix = 0,
    pstr.mlm = 0, pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
    shape.a = shape.p, shape.i = shape.p, shape.d = shape.p)
x, q, p, n, log, lower.tail:
Same meaning as in Gaitdpois.

shape.p, shape.a, shape.i, shape.d:
Same meaning as shape for dzeta; the suffixes .p, .a, .i and .d
refer to the parent, altered, inflated and deflated distributions
respectively.

truncate, max.support:
See Gaitdpois.

a.mix, i.mix, d.mix:
See Gaitdpois.

a.mlm, i.mlm, d.mlm:
See Gaitdpois.

pobs.mlm, pstr.mlm, pdip.mlm, byrow.aid:
See Gaitdpois.

pobs.mix, pstr.mix, pdip.mix:
See Gaitdpois.
These functions for the zeta distribution are analogous to
the Poisson, hence most details have been put in Gaitdpois.
These functions do what Oazeta, Oizeta and Otzeta
collectively did plus much more.
dgaitdzeta gives the density, pgaitdzeta gives the distribution function, qgaitdzeta gives the quantile function, and rgaitdzeta generates random deviates.
The default values of the arguments correspond to ordinary dzeta, pzeta, qzeta and rzeta respectively.
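A quick numerical check of this reduction (a minimal sketch, assuming only the defaults described above):

# With no altered, inflated, truncated or deflated values,
# dgaitdzeta() should reduce to the parent density dzeta().
xx <- 1:20; shape <- 1.1
max(abs(dgaitdzeta(xx, shape.p = shape) - dzeta(xx, shape)))  # ~0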
See Gaitdpois about the dangers of too much inflation and/or deflation on GAITD PMFs, and the difficulties of detecting such. See Gaitdpois for general information also relevant to this parent distribution.
T. W. Yee.
gaitdzeta, Gaitdpois, dgaitdplot, multinomial, Oazeta, Oizeta, Otzeta.
ivec <- c(2, 10); avec <- ivec + 4; shape <- 0.95; xgrid <- 0:29
tvec <- 15; max.support <- 25; pobs.a <- 0.10; pstr.i <- 0.15
(ddd <- dgaitdzeta(xgrid, shape, truncate = tvec,
   max.support = max.support, pobs.mix = pobs.a, a.mix = avec,
   pstr.mix = pstr.i, i.mix = ivec))
## Not run:
plot(xgrid, ddd, type = "n", ylab = "Probability", xlab = "x",
     main = "GAIT PMF---Zeta Parent")
mylwd <- 0.5
abline(v = avec, col = 'blue', lwd = mylwd)
abline(v = ivec, col = 'purple', lwd = mylwd)
abline(v = tvec, col = 'tan', lwd = mylwd)
abline(v = max.support, col = 'magenta', lwd = mylwd)
abline(h = c(pobs.a, pstr.i, 0:1), col = 'gray', lty = "dashed")
lines(xgrid, dzeta(xgrid, shape), col = 'gray', lty = "dashed")  # f_{\pi}
lines(xgrid, ddd, type = "h", col = "pink", lwd = 3)  # GAIT PMF
points(xgrid[ddd == 0], ddd[ddd == 0], pch = 16, col = 'tan', cex = 2)
## End(Not run)
Estimates the 1-parameter gamma distribution by maximum likelihood estimation.
gamma1(link = "loglink", zero = NULL, parallel = FALSE,
       type.fitted = c("mean", "percentiles", "Qlink"),
       percentiles = 50)
link: Link function applied to the (positive) shape parameter. See Links for more choices.
zero, parallel: Details at CommonVGAMffArguments.
type.fitted, percentiles: See CommonVGAMffArguments for information.
The density function is given by

f(y; s) = exp(-y) * y^(s-1) / gamma(s)

for shape parameter s > 0 and y > 0. Here, gamma(s) is the gamma function, as in gamma. The mean of Y (returned as the default fitted values) is E(Y) = s, and the variance is Var(Y) = s.
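For instance, a small simulation (a minimal sketch using stats::rgamma with its default rate of 1) illustrates that both moments equal the shape parameter:

set.seed(1)
s <- exp(2)
y <- rgamma(1e5, shape = s)           # rate = 1 by default
c(mean = mean(y), variance = var(y))  # Both approximately s = 7.39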
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This VGAM family function can handle multiple responses, inputted as a matrix.
The shape parameter s matches with shape in rgamma. The argument rate in rgamma is assumed 1 for this family function, so that scale = 1 is used for calls to dgamma, qgamma, etc. If the rate parameter is unknown, use the family function gammaR to estimate it too.
T. W. Yee
Most standard texts on statistical distributions describe the 1-parameter gamma distribution, e.g.,
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
gammaR for the 2-parameter gamma distribution, lgamma1, lindley, simulate.vlm, gammaff.mm.
gdata <- data.frame(y = rgamma(n = 100, shape = exp(3)))
fit <- vglm(y ~ 1, gamma1, data = gdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Estimates the 2-parameter gamma distribution by maximum likelihood estimation.
gamma2(lmu = "loglink", lshape = "loglink", imethod = 1, ishape = NULL,
       parallel = FALSE, deviance.arg = FALSE, zero = "shape")
lmu, lshape: Link functions applied to the (positive) mu and shape parameters (called mu and s below). See Links for more choices.
ishape: Optional initial value for shape. A NULL means a value is computed internally. If a failure to converge occurs, try using this argument.
imethod: An integer with value 1 or 2 which specifies the initialization method for the mu parameter. If failure to converge occurs try another value.
deviance.arg: Logical. If TRUE, the deviance function is attached to the object. Under ordinary circumstances it should be left alone because it assumes the shape parameter is at its maximum likelihood estimate.
zero: See CommonVGAMffArguments for information.
parallel: Details at CommonVGAMffArguments.
This distribution can model continuous skewed responses. The density function is given by

f(y; mu, s) = (s/mu)^s * y^(s-1) * exp(-s*y/mu) / gamma(s)

for mu > 0, s > 0 and y > 0. Here, gamma(s) is the gamma function, as in gamma. The mean of Y is mu (returned as the fitted values) with variance mu^2 / s. If 0 < s < 1 then the density has a pole at the origin and decreases monotonically as y increases. If s = 1 then this corresponds to the exponential distribution. If s > 1 then the density is zero at the origin and is unimodal with mode at y = mu - mu/s; this can be achieved with lshape = "logloglink". By default, the two linear/additive predictors are eta1 = log(mu) and eta2 = log(s). This family function implements Fisher scoring and the working weight matrices are diagonal.
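As a check of the parameterization (a minimal sketch; the identity follows directly from the density above), this (mu, shape) form corresponds to dgamma with shape = s and rate = s/mu:

mu <- 3; s <- 2; y <- seq(0.1, 10, by = 0.1)
f1 <- (s/mu)^s * y^(s - 1) * exp(-s * y / mu) / gamma(s)
max(abs(f1 - dgamma(y, shape = s, rate = s / mu)))  # Should be ~0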
This family function implements Fisher scoring and the working
weight matrices are diagonal.
This VGAM family function handles multivariate responses,
so that a matrix can be used as the response. The number of columns is
the number of species, say, and zero=-2
means that all
species have a shape parameter equalling a (different) intercept only.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The response must be strictly positive. A moment estimator for the shape parameter may be implemented in the future.
If mu and shape are vectors, then rgamma(n = n, shape = shape, scale = mu/shape) will generate random gamma variates of this parameterization, etc.; see GammaDist.
T. W. Yee
The parameterization of this VGAM family function is the 2-parameter gamma distribution described in the monograph
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
gamma1 for the 1-parameter gamma distribution, gammaR for another parameterization of the 2-parameter gamma distribution that is directly matched with rgamma, bigamma.mckay for a bivariate gamma distribution, gammaff.mm for another, expexpff, GammaDist, CommonVGAMffArguments, simulate.vlm, negloglink.
# Essentially a 1-parameter gamma
gdata <- data.frame(y = rgamma(n = 100, shape = exp(1)))
fit1 <- vglm(y ~ 1, gamma1, data = gdata)
fit2 <- vglm(y ~ 1, gamma2, data = gdata, trace = TRUE, crit = "coef")
coef(fit2, matrix = TRUE)
c(Coef(fit2), colMeans(gdata))

# Essentially a 2-parameter gamma
gdata <- data.frame(y = rgamma(n = 500, rate = exp(-1), shape = exp(2)))
fit2 <- vglm(y ~ 1, gamma2, data = gdata, trace = TRUE, crit = "coef")
coef(fit2, matrix = TRUE)
c(Coef(fit2), colMeans(gdata))
summary(fit2)
Estimate the scale parameter and shape parameters of the Mathai and Moschopoulos (1992) multivariate gamma distribution by maximum likelihood estimation.
gammaff.mm(lscale = "loglink", lshape = "loglink",
           iscale = NULL, ishape = NULL, imethod = 1,
           eq.shapes = FALSE, sh.byrow = TRUE, zero = "shape")
lscale, lshape: Link functions applied to the (positive) parameters scale and the (several) shape parameters. See Links for more choices.
iscale, ishape, sh.byrow: Optional initial values. The default is to compute them internally. Argument sh.byrow concerns the order in which any initial shape values are recycled.
eq.shapes: Logical. Constrain the shape parameters to be equal? See also CommonVGAMffArguments.
imethod, zero: See CommonVGAMffArguments.
This distribution has the bivariate gamma distribution bigamma.mckay as a special case. Let Q > 1 be the number of columns of the response matrix y. Then the joint probability density function is given by

f(y1, ..., yQ; b, s1, ..., sQ) = y1^(s1-1) * (y2-y1)^(s2-1) * ... * (yQ-y_{Q-1})^(sQ-1) * exp(-yQ/b) / [b^(s1+...+sQ) * gamma(s1) * ... * gamma(sQ)]

for scale parameter b > 0, shape parameters s1 > 0, ..., sQ > 0, and 0 < y1 < y2 < ... < yQ. Here, gamma() is the gamma function. By default, the linear/additive predictors are eta1 = log(b), eta2 = log(s1), ..., eta_{Q+1} = log(sQ). Hence M = Q + 1. The marginal distributions are gamma, with shape parameters s1 up to s1 + ... + sQ, but they have a common scale parameter b. The fitted value returned is a matrix with columns equalling their respective means; for column j it is sum(shape[1:j]) * scale. The correlations are always positive; for columns j and k with j < k, the correlation is sqrt(sum(shape[1:j]) / sum(shape[1:k])). Hence the variance of column j is sum(shape[1:j]) * scale^2.
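These moment formulas can be checked by simulation (a minimal sketch; it assumes the partial-sum construction implied above, with independent gamma increments sharing a common scale):

set.seed(1)
b <- 2; shapes <- c(1, 3)  # Common scale and two shape parameters
x1 <- rgamma(1e5, shape = shapes[1], scale = b)
x2 <- rgamma(1e5, shape = shapes[2], scale = b)
y1 <- x1; y2 <- x1 + x2    # Rows are strictly increasing
c(mean(y1), mean(y2))      # Approx cumsum(shapes) * b = 2 and 8
cor(y1, y2)                # Approx sqrt(1/4) = 0.5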
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The response must be a matrix with at least two columns. Apart from the first column, the differences between a column and its LHS adjacent column must all be positive. That is, each row must be strictly increasing.
T. W. Yee
Mathai, A. M. and Moschopoulos, P. G. (1992). A form of multivariate gamma distribution. Ann. Inst. Statist. Math., 44, 97–106.
## Not run: data("mbflood", package = "VGAMdata") mbflood <- transform(mbflood, VdivD = V / D) fit <- vglm(cbind(Q, y2 = Q + VdivD) ~ 1, gammaff.mm, trace = TRUE, data = mbflood) coef(fit, matrix = TRUE) Coef(fit) vcov(fit) colMeans(depvar(fit)) # Check moments head(fitted(fit), 1) ## End(Not run)
## Not run: data("mbflood", package = "VGAMdata") mbflood <- transform(mbflood, VdivD = V / D) fit <- vglm(cbind(Q, y2 = Q + VdivD) ~ 1, gammaff.mm, trace = TRUE, data = mbflood) coef(fit, matrix = TRUE) Coef(fit) vcov(fit) colMeans(depvar(fit)) # Check moments head(fitted(fit), 1) ## End(Not run)
Estimate the parameter of a gamma hyperbola bivariate distribution by maximum likelihood estimation.
gammahyperbola(ltheta = "loglink", itheta = NULL, expected = FALSE)
ltheta: Link function applied to the (positive) parameter theta. See Links for more choices.
itheta: Initial value for the parameter. The default is to estimate it internally.
expected: Logical. If TRUE then Fisher scoring is used, else Newton-Raphson.
The joint probability density function is given by

f(y1, y2; theta) = exp(-exp(-theta) * y1 / theta - theta * y2)

for theta > 0, y1 > 0, y2 > 1. The random variables Y1 and Y2 are independent. The marginal distribution of Y1 is an exponential distribution with rate parameter exp(-theta)/theta. The marginal distribution of Y2 is an exponential distribution that has been shifted to the right by 1 and with rate parameter theta. The fitted values are stored in a two-column matrix with the marginal means, which are theta * exp(theta) and 1 + 1/theta.
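A minimal sketch checking the two marginal means by simulation, using the marginals stated above:

set.seed(1)
theta <- 0.5
y1 <- rexp(1e5, rate = exp(-theta) / theta)
y2 <- rexp(1e5, rate = theta) + 1
c(mean(y1), theta * exp(theta))  # Both approximately 0.82
c(mean(y2), 1 + 1 / theta)       # Both approximately 3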
The default algorithm is Newton-Raphson because Fisher scoring tends to be much slower for this distribution.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The response must be a two-column matrix.
T. W. Yee
Reid, N. (2003). Asymptotics and the theory of inference. Annals of Statistics, 31, 1695–1731.
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, theta = exp(-2 + x2))
gdata <- transform(gdata, y1 = rexp(nn, rate = exp(-theta)/theta),
                          y2 = rexp(nn, rate = theta) + 1)
fit <- vglm(cbind(y1, y2) ~ x2, gammahyperbola(expected = TRUE),
            data = gdata)
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
summary(fit)
Estimates the 2-parameter gamma distribution by maximum likelihood estimation.
gammaR(lrate = "loglink", lshape = "loglink", irate = NULL,
       ishape = NULL, lss = TRUE, zero = "shape")
lrate, lshape: Link functions applied to the (positive) rate and shape parameters. See Links for more choices.
irate, ishape: Optional initial values for rate and shape. A NULL means a value is computed internally. If a failure to converge occurs, try using these arguments.
zero, lss: Details at CommonVGAMffArguments.
The density function is given by

f(y; c, s) = c^s * y^(s-1) * exp(-c*y) / gamma(s)

for rate parameter c > 0, shape parameter s > 0 and y > 0. Here, gamma(s) is the gamma function, as in gamma. The mean of Y is mu = s/c (returned as the fitted values) with variance s/c^2. By default, the two linear/additive predictors are eta1 = log(c) and eta2 = log(s).
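Since this is the same (rate, shape) parameterization as rgamma, the density can be checked directly against dgamma (a minimal sketch):

y <- seq(0.1, 8, by = 0.1); rate <- 1.5; shape <- 2
f1 <- rate^shape * y^(shape - 1) * exp(-rate * y) / gamma(shape)
max(abs(f1 - dgamma(y, shape = shape, rate = rate)))  # Should be ~0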
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The parameters rate and shape match with the arguments rate and shape of rgamma. The order of the arguments agree too. Here, log(rate) is used for the first linear predictor by default, so one can use negloglink instead (which corresponds to modelling log(scale), since -log(rate) = log(scale)). Multiple responses are handled. If the rate parameter is known to equal 1, use the family function gamma1 to estimate the shape parameter. The reciprocal of a 2-parameter gamma random variate has an inverse gamma distribution. One might write a VGAM family function called invgammaR() to estimate this, but for now, just feed in the reciprocal of the response.
T. W. Yee
Most standard texts on statistical distributions describe the 2-parameter gamma distribution, e.g.,
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
gamma1 for the 1-parameter gamma distribution, gamma2 for another parameterization of the 2-parameter gamma distribution, bigamma.mckay for a bivariate gamma distribution, gammaff.mm for another, expexpff, simulate.vlm, rgamma, negloglink.
# Essentially a 1-parameter gamma
gdata <- data.frame(y1 = rgamma(n <- 100, shape = exp(1)))
fit1 <- vglm(y1 ~ 1, gamma1, data = gdata, trace = TRUE)
fit2 <- vglm(y1 ~ 1, gammaR, data = gdata, trace = TRUE, crit = "coef")
coef(fit2, matrix = TRUE)
Coef(fit2)

# Essentially a 2-parameter gamma
gdata <- data.frame(y2 = rgamma(n = 500, rate = exp(1), shape = exp(2)))
fit2 <- vglm(y2 ~ 1, gammaR, data = gdata, trace = TRUE, crit = "coef")
coef(fit2, matrix = TRUE)
Coef(fit2)
summary(fit2)
Fits GARMA models to time series data.
garma(link = "identitylink", p.ar.lag = 1, q.ma.lag = 0,
      coefstart = NULL, step = 1)
link: Link function applied to the mean response. The default is suitable for continuous responses. Note that when the log or logit link is chosen, zero values in the response can be replaced by a small positive boundary value (see bvalue in the example below).
p.ar.lag: A positive integer, the lag for the autoregressive component. Called p below.
q.ma.lag: A non-negative integer, the lag for the moving-average component. Called q below.
coefstart: Starting values for the coefficients. Assigning this argument is highly recommended. For technical reasons, the argument coefstart in vglm cannot be used.
step: Numeric. Step length, e.g., 0.5 means half-stepsizing.
This function draws heavily on Benjamin et al. (1998). See also Benjamin et al. (2003). GARMA models extend the ARMA time series model to generalized responses in the exponential family, e.g., Poisson counts, binary responses. Currently, this function is rudimentary and can handle only certain continuous, count and binary responses. The user must choose an appropriate link for the link argument.

The GARMA(p, q) model is defined by firstly having a response belonging to the exponential family

f(y_t | D_t) = exp( [y_t * theta_t - b(theta_t)] * A_t / phi + c(y_t, phi / A_t) )

where theta_t and phi are the canonical and scale parameters respectively, and A_t are known prior weights. The mean mu_t = E(Y_t | D_t) = b'(theta_t) is related to the linear predictor eta_t by the link function g. Here, D_t is the previous information set. Secondly, the GARMA(p, q) model is defined by

eta_t = g(mu_t) = x_t^T beta + sum_{k=1}^p phi_k * [g(y_{t-k}) - x_{t-k}^T beta] + sum_{k=1}^q theta_k * [g(y_{t-k}) - eta_{t-k}].

Parameter vectors beta, phi and theta are estimated by maximum likelihood.
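To make the recursion concrete, here is an illustrative sketch (not VGAM code; garma10.mu, beta0, phi1 and bval are hypothetical names and values) of the GARMA(1, 0) mean recursion for a Poisson response with a log link:

# Compute mu_t from the previous observation via
# eta_t = beta0 + phi1 * (log(max(y_{t-1}, bval)) - beta0).
garma10.mu <- function(y, beta0, phi1, bval = 0.1) {
  mu <- numeric(length(y))
  yc <- pmax(y, bval)  # Replace zeros by a small boundary value
  mu[1] <- exp(beta0)
  for (tt in 2:length(y)) {
    eta <- beta0 + phi1 * (log(yc[tt - 1]) - beta0)
    mu[tt] <- exp(eta)
  }
  mu
}
garma10.mu(c(4, 0, 2, 5, 3), beta0 = 1, phi1 = 0.4)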
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm.
This VGAM family function is 'non-standard' in that the model does need some coercing to get it into the VGLM framework. Special code is required to get it running. A consequence is that some methods functions may give wrong results when applied to the fitted object.
This function is unpolished and requires lots of improvements. In particular, initialization is very poor. Results appear very sensitive to the quality of the initial values. A limited amount of experience has shown that half-stepsizing is often needed for convergence, therefore choosing crit = "coef" is not recommended. Overdispersion is not handled. For binomial responses it is currently best to input a vector of 1s and 0s rather than the cbind(successes, failures) because the initialize slot is rudimentary.
T. W. Yee
Benjamin, M. A., Rigby, R. A. and Stasinopoulos, M. D. (1998). Fitting Non-Gaussian Time Series Models. Pages 191–196 in: Proceedings in Computational Statistics COMPSTAT 1998 by Payne, R. and P. J. Green. Physica-Verlag.
Benjamin, M. A., Rigby, R. A. and Stasinopoulos, M. D. (2003). Generalized Autoregressive Moving Average Models. Journal of the American Statistical Association, 98: 214–223.
Zeger, S. L. and Qaqish, B. (1988). Markov regression models for time series: a quasi-likelihood approach. Biometrics, 44: 1019–1031.
gdata <- data.frame(interspike = c(68, 41, 82, 66, 101, 66, 57, 41, 27,
  78, 59, 73, 6, 44, 72, 66, 59, 60, 39, 52, 50, 29, 30, 56, 76, 55, 73,
  104, 104, 52, 25, 33, 20, 60, 47, 6, 47, 22, 35, 30, 29, 58, 24, 34,
  36, 34, 6, 19, 28, 16, 36, 33, 12, 26, 36, 39, 24, 14, 28, 13, 2, 30,
  18, 17, 28, 9, 28, 20, 17, 12, 19, 18, 14, 23, 18, 22, 18, 19, 26, 27,
  23, 24, 35, 22, 29, 28, 17, 30, 34, 17, 20, 49, 29, 35, 49, 25, 55,
  42, 29, 16))  # See Zeger and Qaqish (1988)
gdata <- transform(gdata, spikenum = seq(interspike))
bvalue <- 0.1  # .Machine$double.xmin # Boundary value
fit <- vglm(interspike ~ 1, trace = TRUE, data = gdata,
            garma(loglink(bvalue = bvalue), p = 2,
                  coefstart = c(4, 0.3, 0.4)))
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)  # A bug here
## Not run:
with(gdata, plot(interspike, ylim = c(0, 120), las = 1,
     xlab = "Spike Number", ylab = "Inter-Spike Time (ms)", col = "blue"))
with(gdata, lines(spikenum[-(1:fit@misc$plag)], fitted(fit),
     col = "orange"))
abline(h = mean(with(gdata, interspike)), lty = "dashed", col = "gray")
## End(Not run)
Maximum likelihood estimation of the 4-parameter generalized beta II distribution.
genbetaII(lscale = "loglink", lshape1.a = "loglink",
          lshape2.p = "loglink", lshape3.q = "loglink",
          iscale = NULL, ishape1.a = NULL, ishape2.p = NULL,
          ishape3.q = NULL, lss = TRUE, gscale = exp(-5:5),
          gshape1.a = exp(-5:5), gshape2.p = exp(-5:5),
          gshape3.q = exp(-5:5), zero = "shape")
lss: See CommonVGAMffArguments for important information.
lshape1.a, lscale, lshape2.p, lshape3.q: Parameter link functions applied to the shape parameter a, the scale parameter scale, and the other two shape parameters p and q. All four parameters are positive. See Links for more choices.
iscale, ishape1.a, ishape2.p, ishape3.q: Optional initial values for the parameters. A NULL means a value is computed internally. If a failure to converge occurs, try using these arguments.
gscale, gshape1.a, gshape2.p, gshape3.q: See CommonVGAMffArguments for information; grid values for the initial value search.
zero: The default is to set all the shape parameters to be intercept-only. See CommonVGAMffArguments for information.
This distribution is most useful for unifying a substantial number of size distributions. For example, the Singh-Maddala, Dagum, Fisk (log-logistic), Lomax (Pareto type II), inverse Lomax and beta distribution of the second kind are all special cases. Full details can be found in Kleiber and Kotz (2003), and Brazauskas (2002). The argument names given here are used by other families that are special cases of this family. Fisher scoring is used here and for the special cases too.
The 4-parameter generalized beta II distribution has density

f(y; a, b, p, q) = a * y^(a*p - 1) / [b^(a*p) * B(p, q) * (1 + (y/b)^a)^(p+q)]

for a > 0, b > 0, p > 0, q > 0, y >= 0. Here B is the beta function, and b is the scale parameter scale, while the others are shape parameters. The mean is

E(Y) = b * gamma(p + 1/a) * gamma(q - 1/a) / [gamma(p) * gamma(q)]

provided -a*p < 1 < a*q; these are returned as the fitted values.
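For example, the Singh-Maddala special case can be verified numerically (a minimal sketch; it assumes the p = 1 reduction and the scale-first argument order of dsinmad):

x <- seq(0.1, 5, by = 0.1)
f1 <- dgenbetaII(x, scale = 2, shape1.a = 3, shape2.p = 1, shape3.q = 2)
f2 <- dsinmad(x, scale = 2, shape1.a = 3, shape3.q = 2)
max(abs(f1 - f2))  # Should be ~0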
This family function handles multiple responses.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This distribution is very flexible and it is not generally recommended to use this family function when the sample size is small; numerical problems easily occur with small samples. Probably several hundred observations at least are needed in order to estimate the parameters with any level of confidence. Neither is the inclusion of covariates recommended at all, not unless there are several thousand observations. The mean is finite only when -a*p < 1 < a*q, and this can be easily violated by the parameter estimates for small sample sizes. Try fitting some of the special cases of this distribution (e.g., sinmad, fisk, etc.) first, and then possibly use those models for initial values for this distribution.

The default is to use a grid search with respect to all four parameters; this is quite costly and time consuming. If the self-starting initial values fail, try experimenting with the initial value arguments. Also, the constraint -a*p < 1 < a*q may be violated as the iterations progress so it pays to monitor convergence, e.g., set trace = TRUE. Successful convergence depends on having very good initial values. This is rather difficult for this distribution so a grid search is conducted by default. One suggestion for increasing the estimation reliability is to set stepsize = 0.5 and maxit = 100; see vglm.control.
T. W. Yee, with help from Victor Miranda.
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Brazauskas, V. (2002). Fisher information matrix for the Feller-Pareto distribution. Statistics & Probability Letters, 59, 159–167.
dgenbetaII, betaff, betaII, dagum, sinmad, fisk, lomax, inv.lomax, paralogistic, inv.paralogistic, lino, CommonVGAMffArguments, vglm.control.
## Not run:
gdata <- data.frame(y = rsinmad(3000, shape1 = exp(1), scale = exp(2),
                                shape3 = exp(1)))  # A special case!
fit <- vglm(y ~ 1, genbetaII(lss = FALSE), data = gdata, trace = TRUE)
fit <- vglm(y ~ 1, data = gdata, trace = TRUE,
            genbetaII(ishape1.a = 3, iscale = 7, ishape3.q = 2.3))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Density for the generalized beta II distribution with shape parameters a, p and q, and scale parameter scale.
dgenbetaII(x, scale = 1, shape1.a, shape2.p, shape3.q, log = FALSE)
x: vector of quantiles.
shape1.a, shape2.p, shape3.q: positive shape parameters.
scale: positive scale parameter.
log: Logical. If log = TRUE then the logarithm of the density is returned.
See genbetaII, which is the VGAM family function for estimating the parameters by maximum likelihood estimation. Several distributions, such as the Singh-Maddala, are special cases of this flexible 4-parameter distribution. The product of shape1.a and shape2.p determines the behaviour of the density at the origin.
dgenbetaII gives the density.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
dgenbetaII(0, shape1.a = 1/4, shape2.p = 4, shape3.q = 3)
dgenbetaII(0, shape1.a = 1/4, shape2.p = 2, shape3.q = 3)
dgenbetaII(0, shape1.a = 1/4, shape2.p = 8, shape3.q = 3)
Estimation of the 3-parameter generalized gamma distribution proposed by Stacy (1962).
gengamma.stacy(lscale = "loglink", ld = "loglink", lk = "loglink",
               iscale = NULL, id = NULL, ik = NULL, imethod = 1,
               gscale.mux = exp((-4:4)/2), gshape1.d = exp((-5:5)/2),
               gshape2.k = exp((-5:5)/2), probs.y = 0.3,
               zero = c("d", "k"))
lscale, ld, lk: Parameter link functions applied to each of the positive parameters scale (called b below), d and k. See Links for more choices.
iscale, id, ik: Initial values for b, d and k. The defaults mean they are computed internally.
gscale.mux, gshape1.d, gshape2.k: See CommonVGAMffArguments for information; grid values for the initial value search.
imethod, probs.y, zero: See CommonVGAMffArguments for information.
The probability density function can be written

f(y; b, d, k) = d * b^(-d*k) * y^(d*k - 1) * exp(-(y/b)^d) / gamma(k)

for scale parameter b > 0, Weibull-type shape parameter d > 0, gamma-type shape parameter k > 0, and y > 0. The mean of Y is mu = b * gamma(k + 1/d) / gamma(k) (returned as the fitted values), which equals b*k if d = 1.

There are many special cases, as given in Table 1 of Stacy and Mihram (1965). In the following, the parameters are in the order (b, d, k). The special cases are: Exponential (b, 1, 1), Gamma (b, 1, k), Weibull (b, d, 1), Chi Squared (2, 1, nu/2) with nu degrees of freedom, Chi (sqrt(2), 2, nu/2) with nu degrees of freedom, Half-normal (sqrt(2), 2, 1/2), Circular normal (sqrt(2), 2, 1), Spherical normal (sqrt(2), 2, 3/2), Rayleigh (c*sqrt(2), 2, 1) where c > 0. Also, the log-normal distribution corresponds to the limit k = Inf.
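The Weibull special case above gives a convenient numerical check (a minimal sketch):

q <- seq(0.1, 5, by = 0.1); b <- 2; d <- 1.5
max(abs(pgengamma.stacy(q, scale = b, d = d, k = 1) -
        pweibull(q, shape = d, scale = b)))  # Should be ~0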
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
Several authors have considered maximum likelihood estimation for the generalized gamma distribution and have found that the Newton-Raphson algorithm does not work very well and that the existence of solutions to the log-likelihood equations is sometimes in doubt. Although Fisher scoring is used here, it is likely that the same problems will be encountered. It appears that large samples are required, for example, the estimator of k became asymptotically normal only with 400 or more observations. It is not uncommon for maximum likelihood estimates to fail to converge even with two or three hundred observations. With covariates, even more observations are needed to increase the chances of convergence. Using covariates is not advised unless the sample size is at least a few thousand, and even if so, modelling 1 or 2 parameters as intercept-only is a very good idea (e.g., zero = 2:3). Monitoring convergence is also a very good idea (e.g., set trace = TRUE). Half-stepping is not uncommon, and if this occurs, then the results should be viewed with more suspicion.
The notation used here differs from Stacy (1962) and Prentice (1974). Poor initial values may result in failure to converge, so if there are covariates and there are convergence problems, try using or checking the zero argument (e.g., zero = 2:3) or the ik argument or the imethod argument, etc.
T. W. Yee
Stacy, E. W. (1962). A generalization of the gamma distribution. Annals of Mathematical Statistics, 33(3), 1187–1192.
Stacy, E. W. and Mihram, G. A. (1965). Parameter estimation for a generalized gamma distribution. Technometrics, 7, 349–358.
Prentice, R. L. (1974). A log gamma model and its maximum likelihood estimation. Biometrika, 61, 539–544.
rgengamma.stacy, gamma1, gamma2, prentice74, simulate.vlm, chisq, lognormal, rayleigh, weibullR.
## Not run:
k <- exp(-1); Scale <- exp(1); dd <- exp(0.5); set.seed(1)
gdata <- data.frame(y = rgamma(2000, shape = k, scale = Scale))
gfit <- vglm(y ~ 1, gengamma.stacy, data = gdata, trace = TRUE)
coef(gfit, matrix = TRUE)
## End(Not run)
Density, distribution function, quantile function and random generation for the generalized gamma distribution with scale parameter scale, and parameters d and k.
dgengamma.stacy(x, scale = 1, d, k, log = FALSE)
pgengamma.stacy(q, scale = 1, d, k, lower.tail = TRUE, log.p = FALSE)
qgengamma.stacy(p, scale = 1, d, k, lower.tail = TRUE, log.p = FALSE)
rgengamma.stacy(n, scale = 1, d, k)
x, q: vector of quantiles.
p: vector of probabilities.
n: number of observations. Same as in runif.
scale: the (positive) scale parameter b.
d, k: the (positive) parameters d and k.
log: Logical. If log = TRUE then the logarithm of the density is returned.
lower.tail, log.p: Same meaning as in pnorm or qnorm.
See gengamma.stacy, the VGAM family function for estimating the generalized gamma distribution by maximum likelihood estimation, for formulae and other details. Apart from n, all the above arguments may be vectors and are recycled to the appropriate length if necessary.
dgengamma.stacy gives the density, pgengamma.stacy gives the distribution function, qgengamma.stacy gives the quantile function, and rgengamma.stacy generates random deviates.
T. W. Yee and Kai Huang
Stacy, E. W. and Mihram, G. A. (1965). Parameter estimation for a generalized gamma distribution. Technometrics, 7, 349–358.
## Not run:
x <- seq(0, 14, by = 0.01); d <- 1.5; Scale <- 2; k <- 6
plot(x, dgengamma.stacy(x, Scale, d = d, k = k), type = "l",
     col = "blue", ylim = 0:1,
     main = "Blue is density, orange is the CDF",
     sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(qgengamma.stacy(seq(0.05, 0.95, by = 0.05), Scale, d = d, k = k),
      dgengamma.stacy(qgengamma.stacy(seq(0.05, 0.95, by = 0.05),
                                      Scale, d = d, k = k),
                      Scale, d = d, k = k),
      col = "purple", lty = 3, type = "h")
lines(x, pgengamma.stacy(x, Scale, d = d, k = k), col = "orange")
abline(h = 0, lty = 2)
## End(Not run)
Density, distribution function, quantile function and random generation for the original parameterization of the generalized Poisson distribution.
dgenpois0(x, theta, lambda = 0, log = FALSE)
pgenpois0(q, theta, lambda = 0, lower.tail = TRUE)
qgenpois0(p, theta, lambda = 0)
rgenpois0(n, theta, lambda = 0, algorithm = c("qgenpois0", "inv",
          "bup", "chdn", "napp", "bran"))
x, q: Vector of quantiles.
p: Vector of probabilities.
n: Similar to runif.
theta, lambda: See genpoisson0.
lower.tail, log: Similar to dpois and ppois.
algorithm: Character. Six choices are available, standing for the qgenpois0, inversion, build-up, chop-down, normal approximation and branching methods. The first one is the default and calls qgenpois0.
Most of the background to these functions is given in genpoisson0. Some warnings relevant to this distribution are given there. The complicated range of the parameter lambda when negative is no longer supported because the distribution is not normalized. For other GPD variants see Genpois1.
dgenpois0 gives the density, pgenpois0 gives the distribution function, qgenpois0 gives the quantile function, and rgenpois0 generates random deviates. For some of these functions such as dgenpois0 and pgenpois0 the value NaN is returned for elements not satisfying the parameter restrictions, e.g., if lambda > 1. For some of these functions such as rgenpois0 the input must not contain NAs or NaNs, etc. since the implemented algorithms are fragile.
These have not been tested thoroughly. For pgenpois0, mapply is called with 0:q as input, hence it will be very slow and memory-hungry for large values of q. Likewise qgenpois0 and rgenpois0 may suffer from the same limitations.

For rgenpois0: (1) "inv", "bup" and "chdn" appear similar and seem to work okay. (2) "napp" works only when theta is large, away from 0. It suffers from 0-inflation. (3) "bran" has a relatively heavy RHS tail and requires a positive lambda. More details can be found in Famoye (1997) and Demirtas (2017).
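A minimal sketch comparing two of the algorithms against the theoretical mean theta / (1 - lambda):

set.seed(1)
theta <- 4; lambda <- 0.3
y1 <- rgenpois0(1e4, theta, lambda)                     # Default
y2 <- rgenpois0(1e4, theta, lambda, algorithm = "inv")  # Inversion
c(mean(y1), mean(y2), theta / (1 - lambda))  # All approximately 5.71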
The function dgenpois0 uses lfactorial, which equals Inf when x is approximately 1e306 on many machines. So the density is returned as 0 in very extreme cases; see .Machine.
T. W. Yee. For rgenpois0 the last 5 algorithms are based on code written by H. Demirtas (2017) and vectorized by T. W. Yee; but the "bran" algorithm was rewritten from Famoye (1997).
Demirtas, H. (2017). On accurate and precise generation of generalized Poisson variates. Communications in Statistics—Simulation and Computation, 46, 489–499.
Famoye, F. (1997). Generalized Poisson random variate generation. Amer. J. Mathematical and Management Sciences, 17, 219–237.
sum(dgenpois0(0:1000, theta = 2, lambda = 0.5))
## Not run:
theta <- 2; lambda <- 0.2; y <- 0:10
proby <- dgenpois0(y, theta = theta, lambda = lambda, log = FALSE)
plot(y, proby, type = "h", col = "blue", lwd = 2, ylab = "Pr(Y=y)",
     main = paste0("Y ~ GP-0(theta=", theta, ", lambda=",
                   lambda, ")"), las = 1, ylim = c(0, 0.3),
     sub = "Orange is the Poisson probability function")
lines(y + 0.1, dpois(y, theta), type = "h", lwd = 2, col = "orange")
## End(Not run)
Density, distribution function, quantile function and random generation for two parameterizations (GP-1 and GP-2) of the generalized Poisson distribution, both of which are parameterized in terms of the mean.
dgenpois1(x, meanpar, dispind = 1, log = FALSE)
pgenpois1(q, meanpar, dispind = 1, lower.tail = TRUE)
qgenpois1(p, meanpar, dispind = 1)
rgenpois1(n, meanpar, dispind = 1)
dgenpois2(x, meanpar, disppar = 0, log = FALSE)
pgenpois2(q, meanpar, disppar = 0, lower.tail = TRUE)
qgenpois2(p, meanpar, disppar = 0)
rgenpois2(n, meanpar, disppar = 0)
x, q: Vector of quantiles.
p: Vector of probabilities.
n: Similar to runif.
meanpar, dispind: The mean and dispersion index (index of dispersion), which are the two parameters for the GP-1. The mean is positive while the dispersion index is unity or greater.
disppar: The dispersion parameter for the GP-2: disppar >= 0.
lower.tail, log: See Genpois0.
These are wrapper functions for those in Genpois0. The first parameter is the mean, therefore both the GP-1 and GP-2 are recommended for regression and can be compared somewhat to poissonff and negbinomial. The variance of a GP-1 is mu * phi, where phi >= 1 is dispind. The variance of a GP-2 is mu * (1 + alpha * mu)^2, where mu > 0 and alpha >= 0 is the dispersion parameter disppar. Thus the variance is linear with respect to the mean for GP-1 while the variance is cubic with respect to the mean for GP-2.

Recall that the index of dispersion (also known as the dispersion index) is the ratio of the variance and the mean. Also, in the original formulation the mean is mu = theta / (1 - lambda), with variance theta / (1 - lambda)^3. The GP-1 is due to Consul and Famoye (1992). The GP-2 is due to Wang and Famoye (1997).
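These variance functions can be checked by simulation (a minimal sketch, under the formulas stated above):

set.seed(1)
mu <- 4
y1 <- rgenpois1(1e5, meanpar = mu, dispind = 2)
c(var(y1), 2 * mu)                 # GP-1: both approximately 8
y2 <- rgenpois2(1e5, meanpar = mu, disppar = 0.1)
c(var(y2), mu * (1 + 0.1 * mu)^2)  # GP-2: both approximately 7.84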
dgenpois1 and dgenpois2 give the density, pgenpois1 and pgenpois2 give the distribution function, qgenpois1 and qgenpois2 give the quantile function, and rgenpois1 and rgenpois2 generate random deviates. See Genpois0 for more information.
Genpois0 has warnings that should be heeded.
T. W. Yee.
Consul, P. C. and Famoye, F. (1992). Generalized Poisson regression model. Comm. Statist.—Theory and Meth., 2, 89–109.
Wang, W. and Famoye, F. (1997). Modeling household fertility decisions with generalized Poisson regression. J. Population Econom., 10, 273–283.
sum(dgenpois1(0:1000, meanpar = 5, dispind = 2))
## Not run:
dispind <- 5; meanpar <- 5; y <- 0:15
proby <- dgenpois1(y, meanpar = meanpar, dispind)
plot(y, proby, type = "h", col = "blue", lwd = 2, ylab = "P[Y=y]",
     main = paste0("Y ~ GP-1(meanpar=", meanpar, ", dispind=",
                   dispind, ")"), las = 1, ylim = c(0, 0.3),
     sub = "Orange is the Poisson probability function")
lines(y + 0.1, dpois(y, meanpar), type = "h", lwd = 2, col = "orange")
## End(Not run)
Estimation of the two-parameter generalized Poisson distribution (original parameterization).
genpoisson0(ltheta = "loglink", llambda = "logitlink", itheta = NULL,
            ilambda = NULL, imethod = c(1, 1), ishrinkage = 0.95,
            glambda = ppoints(5), parallel = FALSE, zero = "lambda")
ltheta, llambda: Parameter link functions for theta and lambda. See Links for more choices. Since lambda is restricted to [0, 1) here, the logit link is the default for it.
itheta, ilambda: Optional initial values for theta and lambda. The defaults mean they are computed internally.
imethod: See CommonVGAMffArguments for information.
ishrinkage, zero: See CommonVGAMffArguments for information.
glambda, parallel: See CommonVGAMffArguments for information.
The generalized Poisson distribution (GPD) was proposed by Consul and Jain (1973), and it has PMF

f(y; theta, lambda) = theta * (theta + lambda*y)^(y-1) * exp(-theta - lambda*y) / y!

for y = 0, 1, 2, ..., theta > 0 and 0 <= lambda < 1. Theoretically, max(-1, -theta/m) <= lambda <= 1 where m (>= 4) is the greatest positive integer satisfying theta + m*lambda > 0 when lambda < 0 [and then f(y) = 0 for y > m]. However, there are problems with a negative lambda, such as the PMF not being normalized, so this family function restricts lambda to [0, 1).

This original parameterization is called the GP-0 by VGAM, partly because there are two other common parameterizations called the GP-1 and GP-2 (see Yang et al. (2009), genpoisson1 and genpoisson2) that are more suitable for regression. However, genpoisson() has been simplified to genpoisson0 by only handling positive parameters, hence only overdispersion relative to the Poisson is accommodated. Some of the reasons for this are described in Scollnik (1998), e.g., the probabilities do not sum to unity when lambda is negative. To simplify things, VGAM 1.1-4 and later will only handle positive lambda.

An ordinary Poisson distribution corresponds to lambda = 0. The mean (returned as the fitted values) is E(Y) = theta / (1 - lambda) and the variance is theta / (1 - lambda)^3, so that the variance is proportional to the mean, just like the NB-1 and quasi-Poisson.
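The stated moments can be checked exactly by summing the PMF over a long support (a minimal sketch):

theta <- 2; lambda <- 0.4; y <- 0:400
pmf <- dgenpois0(y, theta, lambda)
mu <- sum(y * pmf)
c(mu, theta / (1 - lambda))                       # Mean: both 10/3
c(sum((y - mu)^2 * pmf), theta / (1 - lambda)^3)  # Variance: both ~9.26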
For more information see Consul and Famoye (2006) for a summary and Consul (1989) for more details.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
Although this family function is far less fragile compared to what used to be called genpoisson(), it is still a good idea to monitor convergence because equidispersion may result in numerical problems; try poissonff instead. And underdispersed data will definitely result in numerical problems and warnings; try quasipoisson instead.
This family function replaces genpoisson(), and some of the major changes are: (i) the swapping of the linear predictors; (ii) the change from rhobitlink to logitlink in llambda to reflect that underdispersion is no longer handled; (iii) proper Fisher scoring is implemented to give improved convergence. Notationally, and in the literature too, don't get confused because theta (and not lambda) here really matches more closely with lambda of dpois. This family function handles multiple responses. This distribution is potentially useful for dispersion modelling. Convergence and numerical problems may occur when lambda becomes very close to 0 or 1.
T. W. Yee.
Easton Huch derived the EIM and it has been implemented
in the weights
slot.
Consul, P. C. and Jain, G. C. (1973). A generalization of the Poisson distribution. Technometrics, 15, 791–799.
Consul, P. C. and Famoye, F. (2006). Lagrangian Probability Distributions, Boston, USA: Birkhauser.
Jorgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall.
Consul, P. C. (1989). Generalized Poisson Distributions: Properties and Applications. New York, USA: Marcel Dekker.
Yang, Z., Hardin, J. W., Addy, C. L. (2009). A score test for overdispersion in Poisson regression based on the generalized Poisson-2 model. J. Statist. Plann. Infer., 139, 1514–1521.
Yee, T. W. (2020). On generalized Poisson regression. In preparation.
Genpois0, genpoisson1, genpoisson2, poissonff, negbinomial, Poisson, quasipoisson.
gdata <- data.frame(x2 = runif(nn <- 500))
gdata <- transform(gdata, y1 = rgenpois0(nn, theta = exp(2 + x2),
                                         logitlink(1, inverse = TRUE)))
gfit0 <- vglm(y1 ~ x2, genpoisson0, data = gdata, trace = TRUE)
coef(gfit0, matrix = TRUE)
summary(gfit0)
Estimation of the two-parameter generalized Poisson distribution (GP-1 parameterization) which has the variance as a linear function of the mean.
genpoisson1(lmeanpar = "loglink", ldispind = "logloglink",
            parallel = FALSE, zero = "dispind", vfl = FALSE,
            Form2 = NULL, imeanpar = NULL, idispind = NULL,
            imethod = c(1, 1), ishrinkage = 0.95, gdispind = exp(1:5))
lmeanpar, ldispind: Parameter link functions for the mean and dispersion index. See Links for more choices. The dispersion index is unity or greater, hence the log-log link as default.
vfl, Form2: If vfl = TRUE then Form2 should be assigned a formula; see CommonVGAMffArguments for information.
imeanpar, idispind: Optional initial values for the mean and dispersion index. The defaults mean they are computed internally.
imethod: See CommonVGAMffArguments for information.
ishrinkage, zero: See CommonVGAMffArguments for information.
gdispind, parallel: See CommonVGAMffArguments for information.
This is a variant of the generalized Poisson distribution (GPD) and is similar to the GP-1 referred to by some writers such as Yang et al. (2009). Compared to the original GP-0 (see genpoisson0) the GP-1 has theta = mu / sqrt(phi) and lambda = 1 - 1/sqrt(phi), so that the variance is phi * mu. The first linear predictor by default is eta1 = log(mu), so that the GP-1 is more suitable for regression than the GP-0. This family function can handle only overdispersion relative to the Poisson. An ordinary Poisson distribution corresponds to phi = 1. The mean (returned as the fitted values) is E(Y) = mu. For overdispersed data, this GP parameterization is a direct competitor of the NB-1 and quasi-Poisson.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
See genpoisson0 for warnings relevant here, e.g., it is a good idea to monitor convergence because of equidispersion and underdispersion.
T. W. Yee.
Genpois1, genpoisson0, genpoisson2, poissonff, negbinomial, Poisson, quasipoisson.
gdata <- data.frame(x2 = runif(nn <- 500))
gdata <- transform(gdata, y1 = rgenpois1(nn, exp(2 + x2),
                                         logloglink(-1, inverse = TRUE)))
gfit1 <- vglm(y1 ~ x2, genpoisson1, gdata, trace = TRUE)
coef(gfit1, matrix = TRUE)
summary(gfit1)
Estimation of the two-parameter generalized Poisson distribution (GP-2 parameterization) which has the variance as a cubic function of the mean.
genpoisson2(lmeanpar = "loglink", ldisppar = "loglink",
            parallel = FALSE, zero = "disppar", vfl = FALSE,
            oparallel = FALSE, imeanpar = NULL, idisppar = NULL,
            imethod = c(1, 1), ishrinkage = 0.95, gdisppar = exp(1:5))
lmeanpar, ldisppar: Parameter link functions for the mean and dispersion parameter. See Links for more choices. The dispersion parameter is nonnegative, hence the log link as default.
imeanpar, idisppar: Optional initial values for the mean and dispersion parameter. The defaults mean they are computed internally.
vfl, oparallel: See CommonVGAMffArguments for information.
imethod: See CommonVGAMffArguments for information.
ishrinkage, zero: See CommonVGAMffArguments for information.
gdisppar, parallel: See CommonVGAMffArguments for information.
This is a variant of the generalized Poisson distribution (GPD) and called GP-2 by some writers such as Yang et al. (2009). Compared to the original GP-0 (see genpoisson0) the GP-2 has theta = mu / (1 + alpha*mu) and lambda = alpha*mu / (1 + alpha*mu), so that the variance is mu * (1 + alpha*mu)^2. The first linear predictor by default is eta1 = log(mu), so that the GP-2 is more suitable for regression than the GP-0. This family function can handle only overdispersion relative to the Poisson. An ordinary Poisson distribution corresponds to alpha = 0. The mean (returned as the fitted values) is E(Y) = mu.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
See genpoisson0 for warnings relevant here, e.g., it is a good idea to monitor convergence because of equidispersion and underdispersion.
T. W. Yee.
Letac, G. and Mora, M. (1990). Natural real exponential families with cubic variance functions. Annals of Statistics 18, 1–37.
Genpois2, genpoisson0, genpoisson1, poissonff, negbinomial, Poisson, quasipoisson.
gdata <- data.frame(x2 = runif(nn <- 500))
gdata <- transform(gdata, y1 = rgenpois2(nn, exp(2 + x2),
                                         loglink(-1, inverse = TRUE)))
gfit2 <- vglm(y1 ~ x2, genpoisson2, gdata, trace = TRUE)
coef(gfit2, matrix = TRUE)
summary(gfit2)
Density, distribution function, quantile function and random generation for the generalized Rayleigh distribution.
dgenray(x, scale = 1, shape, log = FALSE)
pgenray(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qgenray(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
rgenray(n, scale = 1, shape)
x, q: vector of quantiles.
p: vector of probabilities.
n: number of observations. If length(n) > 1, the length is taken to be the number required.
scale, shape: positive scale and shape parameters.
log: Logical. If log = TRUE then the logarithm of the density is returned.
lower.tail, log.p: Same meaning as in pnorm or qnorm.
See genrayleigh, the VGAM family function for estimating the parameters, for the formula of the probability density function and other details.
dgenray gives the density, pgenray gives the distribution function, qgenray gives the quantile function, and rgenray generates random deviates.
We define scale as the reciprocal of the scale parameter used by Kundu and Raqab (2005).
Kai Huang and J. G. Lauder and T. W. Yee
## Not run:
shape <- 0.5; Scale <- 1; nn <- 501
x <- seq(-0.10, 3.0, len = nn)
plot(x, dgenray(x, shape, scale = Scale), type = "l", las = 1,
     ylim = c(0, 1.2),
     ylab = paste("[dp]genray(shape = ", shape,
                  ", scale = ", Scale, ")"),
     col = "blue", cex.main = 0.8,
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles")
lines(x, pgenray(x, shape, scale = Scale), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qgenray(probs, shape, scale = Scale)
lines(Q, dgenray(Q, shape, scale = Scale),
      col = "purple", lty = 3, type = "h")
lines(Q, pgenray(Q, shape, scale = Scale),
      col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pgenray(Q, shape, scale = Scale) - probs))  # Should be 0
## End(Not run)
Estimates the two parameters of the generalized Rayleigh distribution by maximum likelihood estimation.
genrayleigh(lscale = "loglink", lshape = "loglink",
            iscale = NULL, ishape = NULL,
            tol12 = 1e-05, nsimEIM = 300, zero = 2)
lscale, lshape: Link functions for the two positive parameters, scale and shape. See Links for more choices.
iscale, ishape: Numeric. Optional initial values for the scale and shape parameters.
nsimEIM, zero: See CommonVGAMffArguments.
tol12: Numeric and positive. Tolerance for testing whether the second shape parameter is either 1 or 2. If so then the working weights need to handle these singularities.
The generalized Rayleigh distribution has density function

f(y; b, s) = (2 * s * y / b^2) * exp(-(y/b)^2) * [1 - exp(-(y/b)^2)]^(s-1)

for y > 0, where the two parameters, b (scale) and s (shape), are positive. The mean cannot be expressed nicely so the median is returned as the fitted values. Applications of the generalized Rayleigh distribution include modeling strength data and general lifetime data. Simulated Fisher scoring is implemented.
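A minimal sketch checking dgenray against pgenray by numerical integration, under the density displayed above:

b <- 1; s <- 0.5
integrate(dgenray, 0, 2, scale = b, shape = s)$value
pgenray(2, scale = b, shape = s)  # Should match the integral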
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
We define scale as the reciprocal of the scale parameter used by Kundu and Raqab (2005).
J. G. Lauder and T. W. Yee
Kundu, D., Raqab, M. C. (2005). Generalized Rayleigh distribution: different methods of estimations. Computational Statistics and Data Analysis, 49, 187–200.
## Not run:
Scale <- exp(1); shape <- exp(1)
rdata <- data.frame(y = rgenray(n = 1000, scale = Scale, shape = shape))
fit <- vglm(y ~ 1, genrayleigh, data = rdata, trace = TRUE)
c(with(rdata, mean(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Estimation of the parameters of the generalized secant hyperbolic distribution.
gensh(shape, llocation = "identitylink", lscale = "loglink",
      zero = "scale", ilocation = NULL, iscale = NULL, imethod = 1,
      glocation.mux = exp((-4:4)/2), gscale.mux = exp((-4:4)/2),
      probs.y = 0.3, tol0 = 1e-4)
shape: Numeric of length 1. Shape parameter, called s below. See Gensh for more information.
llocation, lscale: Parameter link functions applied to the two parameters. See Links for more choices.
zero, imethod: See CommonVGAMffArguments for information.
ilocation, iscale: See CommonVGAMffArguments for information.
glocation.mux, gscale.mux: See CommonVGAMffArguments for information.
probs.y, tol0: See CommonVGAMffArguments and Gensh for information.
The probability density function of the generalized secant hyperbolic distribution is given by

f(y; s, a, b) = (c1 / b) * exp(c2 * z) / [exp(2 * c2 * z) + 2 * C(s) * exp(c2 * z) + 1]

where z = (y - a) / b, for shape parameter s > -pi and all real y. Here, C(s) = cos(s) for -pi < s <= 0 and C(s) = cosh(s) for s > 0. The scalars c1 and c2 are functions of s, chosen so that the standard (a = 0, b = 1) distribution has zero mean and unit variance: c2 = sqrt((pi^2 - s^2)/3) and c1 = c2 * sin(s) / s for -pi < s <= 0, with sin replaced by sinh and the minus sign by a plus sign for s > 0. The mean of Y is the location parameter a (returned as the fitted values). All moments of the distribution are finite.

Further details about the parameterization can be found in Vaughan (2002). Fisher scoring is implemented and it has a diagonal EIM. More details are at Gensh.
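At s = 0 the standard GSH is the unit-variance logistic, which gives a quick check (a minimal sketch; dlogis with scale sqrt(3)/pi has variance 1):

x <- seq(-4, 4, by = 0.5)
max(abs(dgensh(x, shape = 0) -
        dlogis(x, scale = sqrt(3) / pi)))  # Should be ~0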
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
T. W. Yee
Vaughan, D. C. (2002). The generalized secant hyperbolic distribution and its properties. Communications in Statistics—Theory and Methods, 31(2): 219–238.
sh <- -pi / 2; loc <- 2
hdata <- data.frame(x2 = rnorm(nn <- 200))
hdata <- transform(hdata, y = rgensh(nn, sh, loc))
fit <- vglm(y ~ x2, gensh(sh), hdata, trace = TRUE)
coef(fit, matrix = TRUE)
Density, distribution function, quantile function and random generation for the generalized secant hyperbolic distribution.
dgensh(x, shape, location = 0, scale = 1, tol0 = 1e-4, log = FALSE)
pgensh(q, shape, location = 0, scale = 1, tol0 = 1e-4,
       lower.tail = TRUE)
qgensh(p, shape, location = 0, scale = 1, tol0 = 1e-4)
rgensh(n, shape, location = 0, scale = 1, tol0 = 1e-4)
x, q, p, n, log, lower.tail: Similar meaning as in Normal.
shape: Numeric. Shape parameter, called s below.
location, scale: Numeric. The location and (positive) scale parameters.
tol0: Numeric. Used to test whether the shape parameter is close enough to be treated as 0.
This is an implementation of the family of symmetric densities described by Vaughan (2002). By default, the mean and variance are 0 and 1, for all values of s. Some special (default) cases are: s = 0: logistic (which is similar to stats:dt with 9 degrees of freedom); s = -pi/2: the standard secant hyperbolic (whence the name); s -> -pi: uniform(-sqrt(3), sqrt(3)) in the limit.
dgensh gives the density, pgensh gives the distribution function, qgensh gives the quantile function, and rgensh generates random deviates.
Numerical problems may occur when some argument values are extreme.
T. W. Yee.
gensh, logistic, hypersecant, Logistic.
x <- seq(-2, 4, by = 0.01)
loc <- 1; shape <- -pi / 2
## Not run:
plot(x, dgensh(x, shape, loc), type = "l",
     main = "Blue is density, orange is the CDF", ylim = 0:1, las = 1,
     ylab = "", sub = "Purple are 5, 10, ..., 95 percentiles",
     col = "blue")
abline(h = 0, col = "blue", lty = 2)
lines(qgensh((1:19) / 20, shape, loc), type = "h",
      dgensh(qgensh((1:19) / 20, shape, loc), shape, loc),
      col = "purple", lty = 3)
lines(x, pgensh(x, shape, loc), col = "orange")
abline(h = 0, lty = 2)
## End(Not run)
pp <- (1:19) / 20  # Test two functions
max(abs(pgensh(qgensh(pp, shape, loc), shape, loc) - pp))  # Should be 0
Maximum likelihood estimation for the geometric and truncated geometric distributions.
geometric(link = "logitlink", expected = TRUE, imethod = 1,
          iprob = NULL, zero = NULL)
truncgeometric(upper.limit = Inf, link = "logitlink",
               expected = TRUE, imethod = 1, iprob = NULL, zero = NULL)
geometric(link = "logitlink", expected = TRUE, imethod = 1, iprob = NULL, zero = NULL) truncgeometric(upper.limit = Inf, link = "logitlink", expected = TRUE, imethod = 1, iprob = NULL, zero = NULL)
link |
Parameter link function applied to the
probability parameter |
expected |
Logical.
Fisher scoring is used if |
iprob , imethod , zero
|
See |
upper.limit |
Numeric. Upper values. As a vector, it is recycled across responses first. The default value means both family functions should give the same result. |
A random variable Y has a 1-parameter geometric distribution
if P(Y = y) = prob * (1 - prob)^y
for y = 0, 1, 2, ....
Here, prob is the probability of success,
and Y is the number of (independent) trials that are failures
until a success occurs.
Thus the response Y should be a non-negative integer.
The mean of Y is (1 - prob) / prob
and its variance is (1 - prob) / prob^2.
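As a quick check, the pmf and moments above agree with base R's dgeom(), which uses the same failures-before-first-success convention (a sketch with hypothetical values):

prob <- 0.3; y <- 0:5
max(abs(dgeom(y, prob) - prob * (1 - prob)^y))  # Should be 0
c((1 - prob) / prob, (1 - prob) / prob^2)  # Mean and variance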
The geometric distribution is a special case of the
negative binomial distribution (see
negbinomial
).
The geometric distribution is also a special case of the
Borel distribution, which is a Lagrangian distribution.
If Y has a geometric distribution with parameter prob
then 1 + Y
has a positive-geometric distribution with the same parameter.
Multiple responses are permitted.
For truncgeometric(),
the (upper) truncated geometric distribution can have response integer
values from 0 to upper.limit.
It has density prob * (1 - prob)^y / [1 - (1 - prob)^(1 + upper.limit)].
For a generalized truncated geometric distribution with
integer values L to U, say, subtract L
from the response and feed in U - L
as the upper limit.
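For instance, the truncated density above can be checked to sum to unity; a base-R sketch with hypothetical parameter values:

prob <- 0.4; upper.limit <- 5
yy <- 0:upper.limit
sum(prob * (1 - prob)^yy / (1 - (1 - prob)^(1 + upper.limit)))  # Should be 1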
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
T. W. Yee. Help from Viet Hoang Quoc is gratefully acknowledged.
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
negbinomial
,
Geometric
,
betageometric
,
expgeometric
,
zageometric
,
zigeometric
,
rbetageom
,
simulate.vlm
.
gdata <- data.frame(x2 = runif(nn <- 1000) - 0.5)
gdata <- transform(gdata, x3 = runif(nn) - 0.5, x4 = runif(nn) - 0.5)
gdata <- transform(gdata, eta = -1.0 - 1.0 * x2 + 2.0 * x3)
gdata <- transform(gdata, prob = logitlink(eta, inverse = TRUE))
gdata <- transform(gdata, y1 = rgeom(nn, prob))
with(gdata, table(y1))
fit1 <- vglm(y1 ~ x2 + x3 + x4, geometric, data = gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)

# Truncated geometric (between 0 and upper.limit)
upper.limit <- 5
tdata <- subset(gdata, y1 <= upper.limit)
nrow(tdata)  # Less than nn
fit2 <- vglm(y1 ~ x2 + x3 + x4, truncgeometric(upper.limit),
             data = tdata, trace = TRUE)
coef(fit2, matrix = TRUE)

# Generalized truncated geometric (between lower.limit and upper.limit)
lower.limit <- 1; upper.limit <- 8
gtdata <- subset(gdata, lower.limit <= y1 & y1 <= upper.limit)
with(gtdata, table(y1))
nrow(gtdata)  # Less than nn
fit3 <- vglm(y1 - lower.limit ~ x2 + x3 + x4,
             truncgeometric(upper.limit - lower.limit),
             data = gtdata, trace = TRUE)
coef(fit3, matrix = TRUE)
Retrieve one component of the list .smart.prediction
from
smartpredenv
.
get.smart()
get.smart
is used in "read"
mode within a smart function:
it retrieves parameters saved at the time of fitting, and
is used for prediction.
get.smart
is only used in smart functions such as
sm.poly
;
get.smart.prediction
is only used in modelling functions
such as lm
and glm
.
The function
get.smart
gets only a part of .smart.prediction
whereas
get.smart.prediction
gets the entire .smart.prediction
.
Returns with one list component of .smart.prediction
from
smartpredenv
,
in fact, .smart.prediction[[.smart.prediction.counter]]
.
The whole procedure mimics a first-in first-out data structure
(better known as a queue).
The variable .smart.prediction.counter
in
smartpredenv
is incremented beforehand, and then written back to
smartpredenv
.
print(sm.min1)
Retrieves .smart.prediction
from
smartpredenv
.
get.smart.prediction()
A smart modelling function such as lm
allows
smart functions such as sm.bs
to write to
a data structure called .smart.prediction
in
smartpredenv
.
At the end of fitting,
get.smart.prediction
retrieves this data structure.
It is then attached to the object, and used for prediction later.
Returns with the list .smart.prediction
from
smartpredenv
.
## Not run: 
fit$smart <- get.smart.prediction()  # Put at the end of lm()
## End(Not run)
Maximum likelihood estimation of the 3-parameter generalized extreme value (GEV) distribution.
gev(llocation = "identitylink", lscale = "loglink",
    lshape = logofflink(offset = 0.5), percentiles = c(95, 99),
    ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1,
    gprobs.y = (1:9)/10, gscale.mux = exp((-5:5)/6),
    gshape = (-5:5) / 11 + 0.01, iprobs.y = NULL, tolshape0 = 0.001,
    type.fitted = c("percentiles", "mean"), zero = c("scale", "shape"))
gevff(llocation = "identitylink", lscale = "loglink",
      lshape = logofflink(offset = 0.5), percentiles = c(95, 99),
      ilocation = NULL, iscale = NULL, ishape = NULL, imethod = 1,
      gprobs.y = (1:9)/10, gscale.mux = exp((-5:5)/6),
      gshape = (-5:5) / 11 + 0.01, iprobs.y = NULL, tolshape0 = 0.001,
      type.fitted = c("percentiles", "mean"), zero = c("scale", "shape"))
llocation , lscale , lshape
|
Parameter link functions for For the shape parameter,
the default |
percentiles |
Numeric vector of percentiles used for the fitted values.
Values should be between 0 and 100.
This argument is ignored if |
type.fitted |
See |
ilocation , iscale , ishape
|
Numeric. Initial value for the location parameter, |
imethod |
Initialization method. Either the value 1 or 2.
If both methods fail then try using |
gshape |
Numeric vector.
The values are used for a grid search for an initial value
for |
gprobs.y , gscale.mux , iprobs.y
|
Numeric vectors, used for the initial values.
See |
tolshape0 |
Passed into |
zero |
A vector specifying which
linear/additive predictors are modelled as intercepts only.
The values can be from the set {1, 2, 3} corresponding
respectively to the location, scale and shape parameters. |
The GEV distribution function can be written

G(y) = exp( -[1 + xi (y - mu) / sigma]_+^(-1/xi) )

where sigma > 0,
-Inf < mu < Inf,
and 1 + xi (y - mu) / sigma > 0.
Here, x_+ = max(x, 0).
The mu, sigma, xi
are known as the
location, scale and shape parameters respectively.
The cases xi > 0, xi < 0, xi = 0
correspond to the Frechet,
reverse
Weibull, and Gumbel types respectively.
It can be noted that the Gumbel (or Type I) distribution accommodates
many commonly-used distributions such as the normal, lognormal,
logistic, gamma, exponential and Weibull.

For the GEV distribution, the k-th moment about the mean exists
if xi < 1/k.
Provided they exist, the mean and variance are given by
mu + sigma [Gamma(1 - xi) - 1] / xi
and
sigma^2 [Gamma(1 - 2 xi) - Gamma^2(1 - xi)] / xi^2
respectively,
where Gamma
is the gamma function.

Smith (1985) established that when xi > -0.5,
the maximum likelihood estimators are completely regular.
To have some control over the estimated xi
try
using
lshape = logofflink(offset = 0.5)
, say,
or lshape = extlogitlink(min = -0.5, max = 0.5)
, say.
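A quick simulation sketch of the mean formula above, using rgev() with hypothetical parameter values:

set.seed(1)
mu <- 2; sigma <- 1; xi <- 0.2  # xi < 1 so the mean exists
mean(rgev(1e5, mu, sigma, xi))         # Simulated
mu + sigma * (gamma(1 - xi) - 1) / xi  # Theoretical; should be close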
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
Currently, if an estimate of xi is too close to 0 then
an error may occur for
gev()
with multivariate responses.
In general, gevff()
is more reliable than gev()
.
Fitting the GEV by maximum likelihood estimation can be numerically
fraught. If 1 + xi (y - mu) / sigma <= 0
then some crude evasive action is taken but the estimation process
can still fail. This is particularly the case if
vgam
with s
is used; then smoothing is best done with
vglm
with regression splines (bs
or ns
) because vglm
implements
half-stepsizing whereas vgam
doesn't (half-stepsizing
helps handle the problem of straying outside the parameter space.)
The VGAM family function gev
can handle a multivariate
(matrix) response, cf. multiple responses.
If so, each row of the matrix is sorted into
descending order and NA
s are put last.
With a vector or one-column matrix response, using
gevff
will give the same result but be faster, and it handles
the xi = 0 case.
The function
gev
implements Tawn (1988) while
gevff
implements Prescott and Walden (1980).
Function egev()
has been replaced by the
new family function gevff()
. It now
conforms to the usual VGAM philosophy of
having M1
linear predictors per (independent) response.
This is the usual way multiple responses are handled.
Hence vglm(cbind(y1, y2)..., gevff, ...)
will have
6 linear predictors and it is possible to constrain the
linear predictors so that the answer is similar to gev()
.
Missing values in the response of gevff()
will be deleted;
this behaviour is the same as with almost every other
VGAM family function.
The shape parameter xi is difficult to estimate
accurately unless there is a lot of data.
Convergence is slow when xi
is near -0.5.
Given many explanatory variables, it is often a good idea
to make sure
zero = 3
.
The range restrictions of the parameter xi are not
enforced; thus it is possible for a violation to occur.
Successful convergence often depends on having a reasonably good initial
value for xi. If failure occurs try various values for the
argument
ishape
, and if there are covariates,
having zero = 3
is advised.
T. W. Yee
Yee, T. W. and Stephenson, A. G. (2007). Vector generalized linear and additive extreme value models. Extremes, 10, 1–19.
Tawn, J. A. (1988). An extreme-value theory model for dependent observations. Journal of Hydrology, 101, 227–250.
Prescott, P. and Walden, A. T. (1980). Maximum likelihood estimation of the parameters of the generalized extreme-value distribution. Biometrika, 67, 723–724.
Smith, R. L. (1985). Maximum likelihood estimation in a class of nonregular cases. Biometrika, 72, 67–90.
rgev
,
gumbel
,
gumbelff
,
guplot
,
rlplot.gevff
,
gpd
,
weibullR
,
frechet
,
extlogitlink
,
oxtemp
,
venice
,
CommonVGAMffArguments
.
## Not run: 
# Multivariate example
fit1 <- vgam(cbind(r1, r2) ~ s(year, df = 3), gev(zero = 2:3),
             data = venice, trace = TRUE)
coef(fit1, matrix = TRUE)
head(fitted(fit1))
par(mfrow = c(1, 2), las = 1)
plot(fit1, se = TRUE, lcol = "blue", scol = "forestgreen",
     main = "Fitted mu(year) function (centered)", cex.main = 0.8)
with(venice, matplot(year, depvar(fit1)[, 1:2], ylab = "Sea level (cm)",
     col = 1:2, main = "Highest 2 annual sea levels", cex.main = 0.8))
with(venice, lines(year, fitted(fit1)[, 1], lty = "dashed", col = "blue"))
legend("topleft", lty = "dashed", col = "blue", "Fitted 95 percentile")

# Univariate example
(fit <- vglm(maxtemp ~ 1, gevff, data = oxtemp, trace = TRUE))
head(fitted(fit))
coef(fit, matrix = TRUE)
Coef(fit)
vcov(fit)
vcov(fit, untransform = TRUE)
sqrt(diag(vcov(fit)))  # Approximate standard errors
rlplot(fit)
## End(Not run)
Density, distribution function, quantile function and random
generation for the generalized extreme value distribution
(GEV) with location parameter location
, scale parameter
scale
and shape parameter shape
.
dgev(x, location = 0, scale = 1, shape = 0, log = FALSE,
     tolshape0 = sqrt(.Machine$double.eps))
pgev(q, location = 0, scale = 1, shape = 0,
     lower.tail = TRUE, log.p = FALSE)
qgev(p, location = 0, scale = 1, shape = 0,
     lower.tail = TRUE, log.p = FALSE)
rgev(n, location = 0, scale = 1, shape = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
location |
the location parameter |
scale |
the (positive) scale parameter |
shape |
the shape parameter |
log |
Logical.
If |
lower.tail , log.p
|
|
tolshape0 |
Positive numeric.
Threshold/tolerance value for testing whether the shape parameter is close enough to be treated as 0. |
See gev
, the VGAM family function
for estimating the 3 parameters by maximum likelihood estimation,
for formulae and other details.
Apart from n
, all the above arguments may be vectors and
are recycled to the appropriate length if necessary.
dgev
gives the density,
pgev
gives the distribution function,
qgev
gives the quantile function, and
rgev
generates random deviates.
The default value of shape = 0 means the default
distribution is the Gumbel.
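A minimal check of this default against the standard Gumbel closed form:

x <- seq(-2, 4, by = 0.5)
max(abs(pgev(x) - exp(-exp(-x))))  # Should be ~0 (standard Gumbel CDF)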
Currently, these functions have different argument names compared with those in the evd package.
T. W. Yee
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
gev
,
gevff
,
vglm.control
.
loc <- 2; sigma <- 1; xi <- -0.4
pgev(qgev(seq(0.05, 0.95, by = 0.05), loc, sigma, xi), loc, sigma, xi)
## Not run: 
x <- seq(loc - 3, loc + 3, by = 0.01)
plot(x, dgev(x, loc, sigma, xi), type = "l", col = "blue",
     ylim = c(0, 1), main = "Blue is density, orange is the CDF",
     sub = "Purple are 10,...,90 percentiles", ylab = "", las = 1)
abline(h = 0, col = "blue", lty = 2)
lines(qgev(seq(0.1, 0.9, by = 0.1), loc, sigma, xi),
      dgev(qgev(seq(0.1, 0.9, by = 0.1), loc, sigma, xi), loc, sigma, xi),
      col = "purple", lty = 3, type = "h")
lines(x, pgev(x, loc, sigma, xi), type = "l", col = "orange")
abline(h = (0:10)/10, lty = 2, col = "gray50")
## End(Not run)
General Electric and Westinghouse capital data.
data(gew)
data(gew)
A data frame with 20 observations on the following 7 variables.
All variables are numeric vectors.
Variables ending in .g
correspond to General Electric and
those ending in .w
are Westinghouse.
The observations are the years from 1934 to 1953.

Investment figures.
These are Gross investment =
additions to plant and equipment plus maintenance and repairs,
in millions of dollars, deflated by P1.

Capital stocks.
These are The stock of plant and equipment =
accumulated sum of net additions to plant and equipment deflated
by P1,
minus depreciation allowance deflated by P3.

Market values.
These are Value of the firm =
price of common and preferred shares at December 31
(or average price of December 31 and January 31 of the following year)
times number of common and preferred shares outstanding, plus
total book value of debt at December 31, in millions of
dollars, deflated by P2.
These data are a subset of a table in Boot and de Wit (1960),
also known as the Grunfeld data.
It is used a lot in econometrics,
e.g., for seemingly unrelated regressions
(see SURff
).
Here,
P1 = implicit price deflator of producers durable
equipment (base 1947),
P2 = implicit price deflator of G.N.P.
(base 1947), and
P3 = depreciation expense deflator = ten-year
moving average of the wholesale price index of metals and metal
products (base 1947).
Table 10 of: Boot, J. C. G. and de Wit, G. M. (1960) Investment Demand: An Empirical Contribution to the Aggregation Problem. International Economic Review, 1, 3–30.
Grunfeld, Y. (1958) The Determinants of Corporate Investment. Unpublished PhD Thesis (Chicago).
Zellner, A. (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. Journal of the American Statistical Association, 57, 348–368.
SURff
,
http://statmath.wu.ac.at/~zeileis/grunfeld
(the link might now be stale).
str(gew)
Utility function to create a matrix of log-offset values, to help facilitate the Generally-Truncated-Expansion method
goffset(mux, n, a.mix = NULL, i.mix = NULL, d.mix = NULL,
        a.mlm = NULL, i.mlm = NULL, d.mlm = NULL, par1or2 = 1)
mux |
Multiplier. Usually a small positive integer. Must be positive. The value 1 means no change. |
n |
Number of rows. A positive integer, it should be the number of rows of the data frame containing the data. |
a.mix , i.mix , d.mix
|
See, e.g., |
a.mlm , i.mlm , d.mlm
|
See, e.g., |
par1or2 |
Number of parameters of the parent distribution.
Set |
This function is intended to make the
Generally-Truncated-Expansion (GTE) method
easier for the user.
It only makes sense if the linear predictor(s) are
log of the mean of the parent distribution,
which is the usual case for
gaitdpoisson
and
gaitdnbinomial
.
However, for gaitdlog
and gaitdzeta
one should be using
logffMlink
and
zetaffMlink
.
Without this function, the user must do quite a lot
of book-keeping to know which columns of the offset
matrix are to be assigned log(mux).
This can be rather laborious.
In the fictional example below the response is underdispersed with
respect to a Poisson distribution, and doubling the response achieves
approximate equidispersion.
A matrix with n
rows and the same number of
columns that a GAITD regression would produce for
its matrix of linear predictors.
The matrix can be inputted into vglm
by assigning the offset
argument.
This function is still in a developmental stage. The order of the arguments might change, hence it's safest to invoke it with full specification.
gaitdpoisson
,
gaitdlog
,
gaitdzeta
,
gaitdnbinomial
,
Trunc
,
offset
.
i.mix <- c(5, 10, 15, 20); a.mlm <- 13; mymux <- 2
goffset(mymux, 10, i.mix = i.mix, a.mlm = a.mlm)
## Not run: 
org1 <- with(gdata, range(y))  # Original range of the data
vglm(mymux * y ~ 1,
     offset = goffset(mymux, nrow(gdata), i.mix = i.mix, a.mlm = a.mlm),
     gaitdpoisson(a.mlm = mymux * a.mlm, i.mix = mymux * i.mix,
                  truncate = Trunc(org1, mymux)),
     data = gdata)
## End(Not run)
Maximum likelihood estimation of the 2-parameter Gompertz distribution.
gompertz(lscale = "loglink", lshape = "loglink",
         iscale = NULL, ishape = NULL,
         nsimEIM = 500, zero = NULL, nowarning = FALSE)
nowarning |
Logical. Suppress a warning? Ignored for VGAM 0.9-7 and higher. |
lshape , lscale
|
Parameter link functions applied to the
shape parameter |
ishape , iscale
|
Optional initial values.
A |
nsimEIM , zero
|
The Gompertz distribution has a cumulative distribution function

F(y; alpha, beta) = 1 - exp[-(alpha/beta) (exp(beta y) - 1)]

which leads to a probability density function

f(y; alpha, beta) = alpha exp(beta y) exp[-(alpha/beta) (exp(beta y) - 1)]

for y > 0,
alpha > 0,
beta > 0.
Here, beta
is called the scale parameter
scale
,
and alpha is called the shape parameter
(one could refer to alpha
as a location parameter and beta as
a shape parameter; see Lenart (2014)).
The mean involves an exponential integral function.
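A small sketch checking the CDF above against pgompertz(), assuming the shape/scale roles stated (alpha = shape, beta = scale) and hypothetical parameter values:

a <- 1.5; b <- 0.8  # shape (alpha) and scale (beta)
y <- seq(0.1, 2, by = 0.1)
max(abs(pgompertz(y, scale = b, shape = a) -
        (1 - exp(-(a / b) * (exp(b * y) - 1)))))  # Should be ~0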
Simulated Fisher scoring is used and multiple responses are handled.
The Makeham distribution has an additional parameter compared to
the Gompertz distribution.
If samples are drawn from a Gumbel
distribution until a negative value Z
is produced,
then X = -Z
has a Gompertz distribution.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The same warnings in makeham
apply here too.
T. W. Yee
Lenart, A. (2014). The moments of the Gompertz distribution and maximum likelihood estimation of its parameters. Scandinavian Actuarial Journal, 2014, 255–277.
dgompertz
,
makeham
,
simulate.vlm
.
## Not run: 
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, eta1  = -1,
                          eta2  = -1 + 0.2 * x2,
                          ceta1 =  1,
                          ceta2 = -1 + 0.2 * x2)
gdata <- transform(gdata, shape1 = exp(eta1), shape2 = exp(eta2),
                          scale1 = exp(ceta1), scale2 = exp(ceta2))
gdata <- transform(gdata,
                   y1 = rgompertz(nn, scale = scale1, shape = shape1),
                   y2 = rgompertz(nn, scale = scale2, shape = shape2))
fit1 <- vglm(y1 ~ 1,  gompertz, data = gdata, trace = TRUE)
fit2 <- vglm(y2 ~ x2, gompertz, data = gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
summary(fit1)
coef(fit2, matrix = TRUE)
summary(fit2)
## End(Not run)
Density, cumulative distribution function, quantile function and random generation for the Gompertz distribution.
dgompertz(x, scale = 1, shape, log = FALSE)
pgompertz(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qgompertz(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
rgompertz(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
log |
Logical.
If |
lower.tail , log.p
|
|
scale , shape
|
positive scale and shape parameters. |
See gompertz
for details.
dgompertz
gives the density,
pgompertz
gives the cumulative distribution function,
qgompertz
gives the quantile function, and
rgompertz
generates random deviates.
T. W. Yee and Kai Huang
probs <- seq(0.01, 0.99, by = 0.01)
Shape <- exp(1); Scale <- exp(1)
max(abs(pgompertz(qgompertz(p = probs, Scale, shape = Shape),
                  Scale, shape = Shape) - probs))  # Should be 0

## Not run: 
x <- seq(-0.1, 1.0, by = 0.001)
plot(x, dgompertz(x, Scale, shape = Shape), type = "l", las = 1,
     main = "Blue is density, orange is the CDF", col = "blue",
     sub = "Purple lines are the 10,20,...,90 percentiles", ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(x, pgompertz(x, Scale, shape = Shape), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qgompertz(probs, Scale, shape = Shape)
lines(Q, dgompertz(Q, Scale, shape = Shape), col = "purple",
      lty = 3, type = "h")
pgompertz(Q, Scale, shape = Shape) - probs  # Should be all zero
abline(h = probs, col = "purple", lty = 3)
## End(Not run)
Maximum likelihood estimation of the 2-parameter generalized Pareto distribution (GPD).
gpd(threshold = 0, lscale = "loglink",
    lshape = logofflink(offset = 0.5), percentiles = c(90, 95),
    iscale = NULL, ishape = NULL, tolshape0 = 0.001,
    type.fitted = c("percentiles", "mean"), imethod = 1,
    zero = "shape")
threshold |
Numeric, values are recycled if necessary.
The threshold value(s), called |
lscale |
Parameter link function for the scale parameter |
lshape |
Parameter link function for the shape parameter For the shape parameter,
the default |
percentiles |
Numeric vector of percentiles used
for the fitted values. Values should be between 0 and 100.
See the example below for illustration.
This argument is ignored if |
type.fitted |
See |
iscale , ishape
|
Numeric. Optional initial values for |
tolshape0 |
Passed into |
imethod |
Method of initialization, either 1 or 2. The first is the method of
moments, and the second is a variant of this. If neither work, try
assigning values to arguments |
zero |
Can be an integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
For one response, the value should be from the set {1,2}
corresponding respectively to |
The distribution function of the GPD can be written

G(y) = 1 - [1 + xi (y - mu) / sigma]_+^(-1/xi)

where mu
is the location parameter
(known, with value
threshold
),
sigma > 0 is the scale parameter,
xi is the shape parameter, and
x_+ = max(x, 0).
The function
1 - G
is known as the survivor function.
The limit xi -> 0
gives the shifted exponential as a special case:

G(y) = 1 - exp[-(y - mu) / sigma].

The support is y > mu for
xi > 0,
and mu < y < mu - sigma / xi
for
xi < 0.
Smith (1985) showed that if xi <= -0.5 then
this is known as the nonregular case and problems/difficulties
can arise both theoretically and numerically. For the (regular)
case
xi > -0.5
the classical asymptotic
theory of maximum likelihood estimators is applicable; this is
the default.
Although for -1 < xi < -0.5 the usual asymptotic properties
do not apply, the maximum likelihood estimator generally exists and
is superefficient for
-1 < xi < -0.5, so it is
“better” than normal.
When xi < -1
the maximum
likelihood estimator generally does not exist as it effectively becomes
a two parameter problem.
The mean of Y does not exist unless
xi < 1, and
the variance does not exist unless
xi < 0.5. So if
you want to fit a model with finite variance use
lshape = "extlogitlink"
.
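For xi < 1 the mean is mu + sigma / (1 - xi); a quick simulation sketch using rgpd() with hypothetical values:

set.seed(1)
sigma <- 2; xi <- 0.3
mean(rgpd(1e5, location = 0, scale = sigma, shape = xi))  # Simulated
sigma / (1 - xi)                                          # Theoretical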
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
However, for this VGAM family function, vglm
is probably preferred over vgam
when there is smoothing.
Fitting the GPD by maximum likelihood estimation can be numerically
fraught. If 1 + xi (y - mu) / sigma <= 0
then some crude evasive action is taken but the estimation process
can still fail. This is particularly the case if
vgam
with s
is used. Then smoothing is best done with
vglm
with regression splines (bs
or ns
) because vglm
implements
half-stepsizing whereas vgam
doesn't. Half-stepsizing
helps handle the problem of straying outside the parameter space.
The response in the formula of vglm
and vgam
is y.
Internally,
y - mu
is computed.
This VGAM family function can handle multiple
responses, which are inputted as a matrix.
The response stored on the object is the original uncentred data.
With functions rgpd
, dgpd
, etc., the
argument location
matches with the argument threshold
here.
T. W. Yee
Yee, T. W. and Stephenson, A. G. (2007). Vector generalized linear and additive extreme value models. Extremes, 10, 1–19.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
Smith, R. L. (1985). Maximum likelihood estimation in a class of nonregular cases. Biometrika, 72, 67–90.
rgpd
,
meplot
,
gev
,
paretoff
,
vglm
,
vgam
,
s
.
# Simulated data from an exponential distribution (xi = 0)
Threshold <- 0.5
gdata <- data.frame(y1 = Threshold + rexp(n = 3000, rate = 2))
fit <- vglm(y1 ~ 1, gpd(threshold = Threshold), data = gdata, trace = TRUE)
head(fitted(fit))
summary(depvar(fit))  # The original uncentred data
coef(fit, matrix = TRUE)  # xi should be close to 0
Coef(fit)
summary(fit)
head(fit@extra$threshold)  # Note the threshold is stored here

# Check the 90 percentile
ii <- depvar(fit) < fitted(fit)[1, "90%"]
100 * table(ii) / sum(table(ii))  # Should be 90%
# Check the 95 percentile
ii <- depvar(fit) < fitted(fit)[1, "95%"]
100 * table(ii) / sum(table(ii))  # Should be 95%

## Not run: 
plot(depvar(fit), col = "blue", las = 1,
     main = "Fitted 90% and 95% quantiles")
matlines(1:length(depvar(fit)), fitted(fit), lty = 2:3, lwd = 2)
## End(Not run)

# Another example
gdata <- data.frame(x2 = runif(nn <- 2000))
Threshold <- 0; xi <- exp(-0.8) - 0.5
gdata <- transform(gdata, y2 = rgpd(nn, scale = exp(1 + 0.1*x2), shape = xi))
fit <- vglm(y2 ~ x2, gpd(Threshold), data = gdata, trace = TRUE)
coef(fit, matrix = TRUE)

## Not run: 
# Nonparametric fits
# Not so recommended:
fit1 <- vgam(y2 ~ s(x2), gpd(Threshold), data = gdata, trace = TRUE)
par(mfrow = c(2, 1))
plot(fit1, se = TRUE, scol = "blue")
# More recommended:
fit2 <- vglm(y2 ~ sm.bs(x2), gpd(Threshold), data = gdata, trace = TRUE)
plot(as(fit2, "vgam"), se = TRUE, scol = "blue")
## End(Not run)
Density, distribution function, quantile function and random
generation for the generalized Pareto distribution (GPD) with
location parameter location
, scale parameter scale
and shape parameter shape
.
dgpd(x, location = 0, scale = 1, shape = 0, log = FALSE,
     tolshape0 = sqrt(.Machine$double.eps))
pgpd(q, location = 0, scale = 1, shape = 0,
     lower.tail = TRUE, log.p = FALSE)
qgpd(p, location = 0, scale = 1, shape = 0,
     lower.tail = TRUE, log.p = FALSE)
rgpd(n, location = 0, scale = 1, shape = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
location |
the location parameter |
scale |
the (positive) scale parameter |
shape |
the shape parameter |
log |
Logical.
If |
lower.tail , log.p
|
|
tolshape0 |
Positive numeric.
Threshold/tolerance value for testing whether the shape parameter is close enough to be treated as 0. |
See gpd
, the VGAM family function
for estimating the two parameters by maximum likelihood estimation,
for formulae and other details.
Apart from n
, all the above arguments may be vectors and
are recycled to the appropriate length if necessary.
dgpd
gives the density,
pgpd
gives the distribution function,
qgpd
gives the quantile function, and
rgpd
generates random deviates.
The default values of all three parameters, especially
shape = 0, mean the default distribution is the
exponential.
Currently, these functions have different argument names compared with those in the evd package.
T. W. Yee and Kai Huang
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
## Not run: 
loc <- 2; sigma <- 1; xi <- -0.4
x <- seq(loc - 0.2, loc + 3, by = 0.01)
plot(x, dgpd(x, loc, sigma, xi), type = "l", col = "blue",
     main = "Blue is density, red is the CDF", ylim = c(0, 1),
     sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1)
abline(h = 0, col = "blue", lty = 2)
lines(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi),
      dgpd(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi),
           loc, sigma, xi), col = "purple", lty = 3, type = "h")
lines(x, pgpd(x, loc, sigma, xi), type = "l", col = "red")
abline(h = 0, lty = 2)
pgpd(qgpd(seq(0.05, 0.95, by = 0.05), loc, sigma, xi),
     loc, sigma, xi)
## End(Not run)
A 4-column matrix.
data(grain.us)
data(grain.us)
The four columns are all numeric: the prices of wheat flour, corn,
wheat, and rye.
Monthly averages of grain prices in the United States for wheat flour, corn, wheat, and rye for the period January 1961 through October 1972. The units are US dollars per 100 pound sack for wheat flour, and per bushel for corn, wheat and rye.
Ahn and Reinsel (1988).
Ahn, S. K and Reinsel, G. C. (1988). Nested reduced-rank autoregressive models for multiple time series. Journal of the American Statistical Association, 83, 849–856.
## Not run: 
cgrain <- scale(grain.us, scale = FALSE)  # Center the time series only
fit <- vglm(cgrain ~ 1, rrar(Rank = c(4, 1)), epsilon = 1e-3,
            stepsize = 0.5, trace = TRUE, maxit = 50)
summary(fit)
## End(Not run)
Fits Goodman's RC association model (GRC) to a matrix of counts, and more generally, row-column interaction models (RCIMs). RCIMs allow for unconstrained quadratic ordination (UQO).
grc(y, Rank = 1, Index.corner = 2:(1 + Rank), str0 = 1,
    summary.arg = FALSE, h.step = 1e-04, ...)
rcim(y, family = poissonff, Rank = 0, M1 = NULL, weights = NULL,
     which.linpred = 1,
     Index.corner = ifelse(is.null(str0), 0, max(str0)) + 1:Rank,
     rprefix = "Row.", cprefix = "Col.", iprefix = "X2.",
     offset = 0, str0 = if (Rank) 1 else NULL,
     summary.arg = FALSE, h.step = 0.0001,
     rbaseline = 1, cbaseline = 1,
     has.intercept = TRUE, M = NULL,
     rindex = 2:nrow(y), cindex = 2:ncol(y), iindex = 2:nrow(y), ...)
y |
For |
family |
A VGAM family function.
By default, the first linear/additive predictor
is fitted
using main effects plus an optional rank- |
Rank |
An integer from the set
{0,..., |
weights |
|
which.linpred |
Single integer.
Specifies which linear predictor is modelled as the sum of an
intercept, row effect, column effect plus an optional interaction
term. It should be one value from the set |
Index.corner |
A vector of |
rprefix , cprefix , iprefix
|
Character, for rows and columns and interactions respectively. For labelling the indicator variables. |
offset |
Numeric. Either a matrix of the right dimension, else a single numeric expanded into such a matrix. |
str0 |
Ignored if |
summary.arg |
Logical. If |
h.step |
A small positive value that is passed into
|
... |
Arguments that are passed
into |
M1 |
The number of linear predictors of the VGAM |
rbaseline , cbaseline
|
Baseline reference levels for the rows and columns. Currently stored on the object but not used. |
has.intercept |
Logical. Include an intercept? |
M , cindex
|
For |
rindex , iindex
|
|
Goodman's RC association model fits a reduced-rank approximation
to a table of counts.
A Poisson model is assumed.
The log of each cell mean is decomposed as an
intercept plus a row effect plus a column effect plus a reduced-rank
component. The latter can be collectively written A %*% t(C)
,
the product of two ‘thin’ matrices.
Indeed, A
and C
have Rank
columns.
By default, the first column and row of the interaction matrix
A %*% t(C)
are chosen
to be structural zeros, because str0 = 1
.
This means that the first row of A
is all zeros.
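To fix ideas, the rank-1 decomposition of the log cell means can be written out directly; a toy base-R sketch for a hypothetical 4 x 3 table, with all numbers made up, honouring the corner constraints above:

mu0 <- 1                           # Intercept
row.eff <- c(0, 0.5, -0.2, 0.1)    # First row is baseline
col.eff <- c(0, 0.3, -0.4)         # First column is baseline
A <- cbind(c(0, 0.6, -0.1, 0.2))   # str0 = 1: first row of A is 0
C <- cbind(c(0, 1.0, 0.5))         # First entry 0: first column of
                                   # A %*% t(C) is also 0
log.mu <- mu0 + outer(row.eff, col.eff, "+") + A %*% t(C)
exp(log.mu)  # Poisson cell means under the GRC model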
This function uses options()$contrasts
to set up the row and
column indicator variables.
In particular, Equation (4.5) of Yee and Hastie (2003) is used.
These are called Row.
and Col.
(by default) followed
by the row or column number.
The function rcim()
is more general than grc()
.
Its default is a no-interaction model of grc()
, i.e.,
rank-0 and a Poisson distribution. This means that each
row and column has a dummy variable associated with it.
The first row and first column are baseline.
The power of rcim()
is that many VGAM family functions
can be assigned to its family
argument.
For example,
uninormal
fits something in between a 2-way
ANOVA with and without interactions,
alaplace2
with Rank = 0
is something like
medpolish
.
Others include
zipoissonff
and
negbinomial
.
Hopefully one day all VGAM family functions will
work when assigned to the family
argument, although the
result may not have meaning.
Unconstrained quadratic ordination (UQO) can be performed
using rcim()
and grc()
.
This has been called unconstrained Gaussian ordination
in the literature; however, the word Gaussian has two
meanings, which is confusing, so it is better to use
quadratic because a bell-shaped response surface is meant.
UQO is similar to CQO (cqo
) except there are
no environmental/explanatory variables.
Here, a GLM is fitted to each column (species)
that is a quadratic function of hypothetical latent variables
or gradients.
Thus each row of the response has an associated site score,
and each column of the response has an associated optimum
and tolerance matrix.
UQO can be performed here under the assumption that all species
have the same tolerance matrices.
See Yee and Hadi (2014) for details.
It is not recommended that presence/absence data be inputted
because the information content is so low for each site-species
cell.
The example below uses Poisson counts.
An object of class "grc"
, which currently is the same as
an "rrvglm"
object.
Currently,
a rank-0 rcim()
object is of class rcim0-class
,
else of class "rcim"
(this may change in the future).
The function rcim()
is experimental at this stage and
may have bugs.
Quite a lot of expertise is needed when fitting such models and in
their interpretation. For example, the constraint
matrices apply the reduced-rank regression to the first
(see which.linpred
)
linear predictor, and the other linear predictors are intercept-only
and have a common value throughout the entire data set.
This means that, by default,
family =
zipoissonff
is
appropriate but not
family =
zipoisson
.
Else set family =
zipoisson
and
which.linpred = 2
.
To understand what is going on, do examine the constraint
matrices of the fitted object, and reconcile this with
Equations (4.3) to (4.5) of Yee and Hastie (2003).
The functions temporarily create a permanent data frame
called .grc.df
or .rcim.df
, which used
to be needed by summary.rrvglm()
. Then these
data frames are deleted before exiting the function.
If an error occurs then the data frames may be present
in the workspace.
These functions set up the indicator variables etc. before calling
rrvglm
or
vglm
.
The ...
is passed into rrvglm.control
or
vglm.control
. This means, e.g., Rank = 1
is the default for grc()
.
The data should be labelled with rownames
and
colnames
.
Setting trace = TRUE
is recommended to monitor
convergence.
Using criterion = "coefficients"
can result in slow convergence.
If summary = TRUE
then y
can be a
"grc"
object, in which case a summary can be returned.
That is, grc(y, summary = TRUE)
is
equivalent to summary(grc(y))
.
It is not possible to plot a
grc(y, summary = TRUE)
or
rcim(y, summary = TRUE)
object.
Thomas W. Yee, with assistance from Alfian F. Hadi.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. and Hadi, A. F. (2014). Row-column interaction models, with an R implementation. Computational Statistics, 29, 1427–1445.
Goodman, L. A. (1981). Association models and canonical correlation in the analysis of cross-classifications having ordered categories. Journal of the American Statistical Association, 76, 320–334.
rrvglm
,
rrvglm.control
,
rrvglm-class
,
summary.grc
,
moffset
,
Rcim
,
Select
,
Qvar
,
plotrcim0
,
cqo
,
multinomial
,
alcoff
,
crashi
,
auuc
,
olym08
,
olym12
,
poissonff
,
medpolish
.
# Example 1: Undergraduate enrolments at Auckland University in 1990
fitted(grc1 <- grc(auuc))
summary(grc1)

grc2 <- grc(auuc, Rank = 2, Index.corner = c(2, 5))
fitted(grc2)
summary(grc2)

model3 <- rcim(auuc, Rank = 1, fam = multinomial,
               M = ncol(auuc)-1, cindex = 2:(ncol(auuc)-1), trace = TRUE)
fitted(model3)
summary(model3)

# Median polish but not 100 percent reliable. Maybe call alaplace2()...
## Not run: 
rcim0 <- rcim(auuc, fam = alaplace1(tau = 0.5), trace = FALSE, maxit = 500)
round(fitted(rcim0), digits = 0)
round(100 * (fitted(rcim0) - auuc) / auuc, digits = 0)  # Discrepancy
depvar(rcim0)
round(coef(rcim0, matrix = TRUE), digits = 2)
Coef(rcim0, matrix = TRUE)
# constraints(rcim0)
names(constraints(rcim0))

# Compare with medpolish():
(med.a <- medpolish(auuc))
fv <- med.a$overall + outer(med.a$row, med.a$col, "+")
round(100 * (fitted(rcim0) - fv) / fv)  # Hopefully should be all 0s
## End(Not run)

# Example 2: 2012 Summer Olympic Games in London
## Not run: 
top10 <- head(olym12, 10)
grc1.oly12 <- with(top10, grc(cbind(gold, silver, bronze)))
round(fitted(grc1.oly12))
round(resid(grc1.oly12, type = "response"), digits = 1)  # Resp. resids
summary(grc1.oly12)
Coef(grc1.oly12)
## End(Not run)

# Example 3: UQO; see Yee and Hadi (2014)
## Not run: 
n <- 100; p <- 5; S <- 10
pdata <- rcqo(n, p, S, es.opt = FALSE, eq.max = FALSE,
              eq.tol = TRUE, sd.latvar = 0.75)  # Poisson counts
true.nu <- attr(pdata, "latvar")  # The 'truth'; site scores
attr(pdata, "tolerances")  # The 'truth'; tolerances

Y <- Select(pdata, "y", sort = FALSE)  # Y matrix (n x S); the "y" vars
uqo.rcim1 <- rcim(Y, Rank = 1,
                  str0 = NULL,  # Delta covers entire n x M matrix
                  iindex = 1:nrow(Y),  # RRR covers the entire Y
                  has.intercept = FALSE)  # Suppress the intercept

# Plot 1
par(mfrow = c(2, 2))
plot(attr(pdata, "optimums"), Coef(uqo.rcim1)@A, col = "blue",
     type = "p", main = "(a) UQO optimums",
     xlab = "True optimums", ylab = "Estimated (UQO) optimums")
mylm <- lm(Coef(uqo.rcim1)@A ~ attr(pdata, "optimums"))
abline(coef = coef(mylm), col = "orange", lty = "dashed")

# Plot 2
fill.val <- NULL  # Choose this for the new parameterization
plot(attr(pdata, "latvar"), c(fill.val, concoef(uqo.rcim1)), las = 1,
     col = "blue", type = "p", main = "(b) UQO site scores",
     xlab = "True site scores", ylab = "Estimated (UQO) site scores")
mylm <- lm(c(fill.val, concoef(uqo.rcim1)) ~ attr(pdata, "latvar"))
abline(coef = coef(mylm), col = "orange", lty = "dashed")

# Plots 3 and 4
myform <- attr(pdata, "formula")
p1ut <- cqo(myform, family = poissonff,
            eq.tol = FALSE, trace = FALSE, data = pdata)
c1ut <- cqo(Select(pdata, "y", sort = FALSE) ~ scale(latvar(uqo.rcim1)),
            family = poissonff, eq.tol = FALSE, trace = FALSE, data = pdata)
lvplot(p1ut, lcol = 1:S, y = TRUE, pcol = 1:S, pch = 1:S, pcex = 0.5,
       main = "(c) CQO fitted to the original data",
       xlab = "Estimated (CQO) site scores")
lvplot(c1ut, lcol = 1:S, y = TRUE, pcol = 1:S, pch = 1:S, pcex = 0.5,
       main = "(d) CQO fitted to the scaled UQO site scores",
       xlab = "Estimated (UQO) site scores")
## End(Not run)
Maximum likelihood estimation of the 2-parameter Gumbel distribution.
gumbel(llocation = "identitylink", lscale = "loglink",
       iscale = NULL, R = NA, percentiles = c(95, 99),
       mpv = FALSE, zero = NULL)
gumbelff(llocation = "identitylink", lscale = "loglink",
         iscale = NULL, R = NA, percentiles = c(95, 99),
         zero = "scale", mpv = FALSE)
llocation , lscale
|
Parameter link functions for |
iscale |
Numeric and positive.
Optional initial value for |
R |
Numeric. Maximum number of values possible. See Details for more details. |
percentiles |
Numeric vector of percentiles used
for the fitted values. Values should be between 0 and 100.
This argument uses the argument |
mpv |
Logical. If |
zero |
A vector specifying which linear/additive predictors
are modelled as intercepts only. The value (possibly values) can
be from the set {1, 2} corresponding respectively to |
The Gumbel distribution is a generalized extreme value (GEV)
distribution with shape parameter xi = 0.
Consequently it is more easily estimated than the GEV.
See
gev
for more details.

The quantity R is the maximum number of observations possible,
for example, in the Venice data below, the top 10 daily values
are recorded for each year, therefore R = 365
because there are
about 365 days per year.
The MPV is the value of the response such that the probability
of obtaining a value greater than the MPV is 0.5 out of R
observations.
For the Venice data, the MPV is the sea level such that there
is an even chance that the highest level for a particular year
exceeds the MPV.
If
mpv = TRUE
then the column labelled "MPV"
contains
the MPVs when fitted()
is applied to the fitted object.
The formula for the mean of a response Y is
mu + sigma * E,
where E
is a constant
that has value approximately equal to 0.5772
(the Euler-Mascheroni constant).
The formula for the percentiles is (if
R
is not given)
mu - sigma * log[-log(P/100)]
where P
is the
percentile
argument value(s).
If R
is given then the percentiles are
mu - sigma * log[R * (1 - P/100)].
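A minimal check of the no-R percentile formula against qgumbel(), with hypothetical mu and sigma:

mu <- 100; sigma <- exp(1); P <- c(95, 99)
cbind(formula = mu - sigma * log(-log(P / 100)),
      qgumbel = qgumbel(P / 100, location = mu, scale = sigma))
# The two columns should agree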
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
When R
is not given (the default) the fitted percentiles are
those of the data, and not of the
overall population. For example, in the example below, the 50
percentile is approximately the running median through the data;
however, the data are the highest sea level measurements recorded each
year (it therefore equates to the median predicted value or MPV).
Like many other usual VGAM family functions,
gumbelff()
handles (independent) multiple responses.
gumbel()
can handle
a more general
multivariate response, i.e., a
matrix with more than one column. Each row of the matrix is
sorted into descending order.
Missing values in the response are allowed but require
na.action = na.pass
. The response matrix needs to be
padded with any missing values. With a multivariate response
one has a matrix y
, say, where
y[, 2]
contains the second order statistics, etc.
T. W. Yee
Yee, T. W. and Stephenson, A. G. (2007). Vector generalized linear and additive extreme value models. Extremes, 10, 1–19.
Smith, R. L. (1986). Extreme value theory based on the r largest annual events. Journal of Hydrology, 86, 27–43.
Rosen, O. and Cohen, A. (1996). Extreme percentile regression. In: Haerdle, W. and Schimek, M. G. (eds.), Statistical Theory and Computational Aspects of Smoothing: Proceedings of the COMPSTAT '94 Satellite Meeting held in Semmering, Austria, 27–28 August 1994, pp.200–214, Heidelberg: Physica-Verlag.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
rgumbel
,
dgumbelII
,
cens.gumbel
,
guplot
,
gev
,
gevff
,
venice
.
# Example 1: Simulated data
gdata <- data.frame(y1 = rgumbel(n = 1000, loc = 100, scale = exp(1)))
fit1 <- vglm(y1 ~ 1, gumbelff(perc = NULL), data = gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
with(gdata, mean(y1))

# Example 2: Venice data
(fit2 <- vglm(cbind(r1, r2, r3, r4, r5) ~ year, data = venice,
              gumbel(R = 365, mpv = TRUE), trace = TRUE))
head(fitted(fit2))
coef(fit2, matrix = TRUE)
sqrt(diag(vcov(summary(fit2))))  # Standard errors

# Example 3: Try a nonparametric fit ---------------------
# Use the entire data set, including missing values
# Same as as.matrix(venice[, paste0("r", 1:10)]):
Y <- Select(venice, "r", sort = FALSE)
fit3 <- vgam(Y ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE),
             data = venice, trace = TRUE, na.action = na.pass)
depvar(fit3)[4:5, ]  # NAs used to pad the matrix

## Not run: 
# Plot the component functions
par(mfrow = c(2, 3), mar = c(6, 4, 1, 2) + 0.3, xpd = TRUE)
plot(fit3, se = TRUE, lcol = "blue", scol = "limegreen", lty = 1,
     lwd = 2, slwd = 2, slty = "dashed")

# Quantile plot --- plots all the fitted values
qtplot(fit3, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5), lwd = 2,
       pcol = "blue", tadj = 0.1, ylab = "Sea level (cm)")

# Plot the 99 percentile only
year <- venice[["year"]]
matplot(year, Y, ylab = "Sea level (cm)", type = "n")
matpoints(year, Y, pch = "*", col = "blue")
lines(year, fitted(fit3)[, "99%"], lwd = 2, col = "orange")

# Check the 99 percentiles with a smoothing spline.
# Nb. (1-0.99) * 365 = 3.65 is approx. 4, meaning the 4th order
# statistic is approximately the 99 percentile.
plot(year, Y[, 4], ylab = "Sea level (cm)", type = "n",
     main = "Orange is 99 percentile, Green is a smoothing spline")
points(year, Y[, 4], pch = "4", col = "blue")
lines(year, fitted(fit3)[, "99%"], lty = 1, col = "orange")
lines(smooth.spline(year, Y[, 4], df = 4), col = "limegreen", lty = 2)
## End(Not run)
Density, cumulative distribution function, quantile function and random generation for the Gumbel-II distribution.
dgumbelII(x, scale = 1, shape, log = FALSE)
pgumbelII(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qgumbelII(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
rgumbelII(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in runif. |
log |
Logical.
If log = TRUE, the logarithm of the density is returned. |
lower.tail , log.p
|
Same meaning as in punif or qunif. |
shape , scale
|
positive shape and scale parameters. |
See gumbelII
for details.
dgumbelII
gives the density,
pgumbelII
gives the cumulative distribution function,
qgumbelII
gives the quantile function, and
rgumbelII
generates random deviates.
T. W. Yee and Kai Huang
probs <- seq(0.01, 0.99, by = 0.01)
Scale <- exp(1); Shape <- exp(0.5)
max(abs(pgumbelII(qgumbelII(p = probs, shape = Shape, Scale),
                  shape = Shape, Scale) - probs))  # Should be 0

## Not run: 
x <- seq(-0.1, 10, by = 0.01)
plot(x, dgumbelII(x, shape = Shape, Scale), type = "l", col = "blue",
     main = "Blue is density, orange is the CDF", las = 1,
     sub = "Red lines are the 10,20,...,90 percentiles",
     ylab = "", ylim = 0:1)
abline(h = 0, col = "blue", lty = 2)
lines(x, pgumbelII(x, shape = Shape, Scale), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qgumbelII(probs, shape = Shape, Scale)
lines(Q, dgumbelII(Q, Scale, Shape), col = "red", lty = 3, type = "h")
pgumbelII(Q, shape = Shape, Scale) - probs  # Should be all zero
abline(h = probs, col = "red", lty = 3)
## End(Not run)
Maximum likelihood estimation of the 2-parameter Gumbel-II distribution.
gumbelII(lscale = "loglink", lshape = "loglink", iscale = NULL,
         ishape = NULL, probs.y = c(0.2, 0.5, 0.8), perc.out = NULL,
         imethod = 1, zero = "shape", nowarning = FALSE)
nowarning |
Logical. Suppress a warning? |
lshape , lscale
|
Parameter link functions applied to the
(positive) shape parameter (called s below) and
the (positive) scale parameter (called b below).
See Links for more choices. |
ishape , iscale
|
Optional initial values for the shape and scale parameters. |
imethod |
See CommonVGAMffArguments for information. |
zero , probs.y
|
Details at CommonVGAMffArguments. |
perc.out |
If the fitted values are to be quantiles then set this argument to be the percentiles of these, e.g., 50 for median. |
The Gumbel-II density for a response \(Y\) is
\(f(y; b, s) = (s/b) \, (y/b)^{-s-1} \exp\{ -(y/b)^{-s} \}\)
for \(y > 0\), scale parameter \(b > 0\) and shape parameter \(s > 0\).
The cumulative distribution function is
\(F(y; b, s) = \exp\{ -(y/b)^{-s} \}.\)
The mean of \(Y\) is \(b \, \Gamma(1 - 1/s)\)
(returned as the fitted values)
when \(s > 1\),
and the variance is
\(b^2 \, [\Gamma(1 - 2/s) - \Gamma^2(1 - 1/s)]\)
when \(s > 2\).
This distribution looks similar to
weibullR
, and is
due to Gumbel (1954).
This VGAM family function currently does not handle censored data. Fisher scoring is used to estimate the two parameters. Probably similar regularity conditions hold for this distribution compared to the Weibull distribution.
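As a quick numerical check of the distributional form stated above (a minimal sketch: the closed-form CDF and mean are restated from the Details as plain R expressions, not taken from the package internals, and the parameter values are arbitrary choices):

# Minimal sketch: compare pgumbelII() against the closed-form CDF
# F(y) = exp(-(y/b)^(-s)) stated above, and check the mean formula.
library(VGAM)
b <- 2; s <- 3  # Arbitrary scale and shape; s > 2 so the mean
                # and variance both exist
y <- c(0.5, 1, 2, 5, 10)
max(abs(pgumbelII(y, scale = b, shape = s) -
        exp(-(y / b)^(-s))))               # Should be (near) 0
set.seed(1)
mean(rgumbelII(1e5, scale = b, shape = s)) # Approximately equal to...
b * gamma(1 - 1/s)                         # ...the theoretical mean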
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
See weibullR
.
This VGAM family function handles multiple responses.
T. W. Yee
Gumbel, E. J. (1954). Statistical theory of extreme values and some practical applications. Applied Mathematics Series, volume 33, U.S. Department of Commerce, National Bureau of Standards, USA.
gdata <- data.frame(x2 = runif(nn <- 1000))
gdata <- transform(gdata, heta1 = +1,
                          heta2 = -1 + 0.1 * x2,
                          ceta1 = 0,
                          ceta2 = 1)
gdata <- transform(gdata, shape1 = exp(heta1),
                          shape2 = exp(heta2),
                          scale1 = exp(ceta1),
                          scale2 = exp(ceta2))
gdata <- transform(gdata,
                   y1 = rgumbelII(nn, scale = scale1, shape = shape1),
                   y2 = rgumbelII(nn, scale = scale2, shape = shape2))
fit <- vglm(cbind(y1, y2) ~ x2,
            gumbelII(zero = c(1, 2, 3)), data = gdata, trace = TRUE)
coef(fit, matrix = TRUE)
vcov(fit)
summary(fit)
Density, distribution function, quantile function and random
generation for the Gumbel distribution with
location parameter location
and
scale parameter scale
.
dgumbel(x, location = 0, scale = 1, log = FALSE)
pgumbel(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
qgumbel(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
rgumbel(n, location = 0, scale = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If length(n) > 1, the length is taken to be the number required. |
location |
the location parameter \(\mu\). |
scale |
the (positive) scale parameter \(\sigma\). |
log |
Logical.
If log = TRUE, the logarithm of the density is returned. |
lower.tail , log.p
|
Same meaning as in punif or qunif. |
The Gumbel distribution is a special case of the
generalized extreme value (GEV) distribution where
the shape parameter \(\xi = 0\).
The latter has 3 parameters, so the Gumbel distribution has two.
The Gumbel distribution function is
\(G(y) = \exp\{ -\exp[ -(y - \mu)/\sigma ] \}\)
where \(-\infty < y < \infty\),
\(-\infty < \mu < \infty\) and \(\sigma > 0\).
Its mean is
\(\mu + \sigma \gamma\)
and its variance is
\(\sigma^2 \pi^2 / 6\),
where \(\gamma\) is Euler's constant (which can be
obtained as -digamma(1)).
See gumbel
, the VGAM family function
for estimating the two parameters by maximum likelihood estimation,
for formulae and other details.
Apart from n
, all the above arguments may be vectors and
are recycled to the appropriate length if necessary.
dgumbel
gives the density,
pgumbel
gives the distribution function,
qgumbel
gives the quantile function, and
rgumbel
generates random deviates.
The VGAM family function gumbel
can estimate the parameters of a Gumbel distribution using
maximum likelihood estimation.
T. W. Yee
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
gumbel
,
gumbelff
,
gev
,
dgompertz
.
mu <- 1; sigma <- 2
y <- rgumbel(n = 100, loc = mu, scale = sigma)
c(mean(y), mu - sigma * digamma(1))  # Sample and population means
c(var(y), sigma^2 * pi^2 / 6)        # Sample and population variances

## Not run: 
x <- seq(-2.5, 3.5, by = 0.01)
loc <- 0; sigma <- 1
plot(x, dgumbel(x, loc, sigma), type = "l", col = "blue",
     main = "Blue is density, red is the CDF", ylim = c(0, 1),
     sub = "Purple are 5,10,...,95 percentiles", ylab = "", las = 1)
abline(h = 0, col = "blue", lty = 2)
lines(qgumbel(seq(0.05, 0.95, by = 0.05), loc, sigma),
      dgumbel(qgumbel(seq(0.05, 0.95, by = 0.05), loc, sigma),
              loc, sigma), col = "purple", lty = 3, type = "h")
lines(x, pgumbel(x, loc, sigma), type = "l", col = "red")
abline(h = 0, lty = 2)
## End(Not run)
Produces a Gumbel plot, a diagnostic plot for checking whether the data appears to be from a Gumbel distribution.
guplot(object, ...)
guplot.default(y, main = "Gumbel Plot", xlab = "Reduced data",
               ylab = "Observed data", type = "p", ...)
guplot.vlm(object, ...)
y |
A numerical vector. |
main |
Character. Overall title for the plot. |
xlab |
Character. Title for the x axis. |
ylab |
Character. Title for the y axis. |
type |
Type of plot. The default means points are plotted. |
object |
An object that inherits class "vlm", e.g., a vglm object. |
... |
Graphical arguments passed into plot. |
If \(Y\) has a Gumbel distribution then plotting the sorted
values \(y_{(i)}\)
versus the reduced values \(r_{(i)}\) should
appear linear. The reduced values are given by
\(r_{(i)} = -\log(-\log p_{(i)})\)
where \(p_{(i)}\) is the \(i\)th plotting position, taken
here to be \((i - 0.5)/n\).
Here, \(n\) is the number of observations.
Curvature upwards/downwards may indicate a Frechet/Weibull
distribution, respectively. Outliers may also be detected
using this plot.
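As a sketch of the construction just described (a minimal illustration: the plotting-position formula \((i - 0.5)/n\) is restated from the text above, and building the coordinates by hand is this example's choice):

# Minimal sketch: build the Gumbel plot coordinates by hand and
# compare them with what guplot() returns invisibly.
library(VGAM)
set.seed(1)
y <- rgumbel(100, loc = 2, scale = 3)
n <- length(y)
p <- ((1:n) - 0.5) / n   # Plotting positions
r <- -log(-log(p))       # Reduced values
ii <- guplot(y)          # Returns x = reduced, y = sorted, invisibly
max(abs(ii$x - r))       # Should be (near) 0
max(abs(ii$y - sort(y))) # Should be 0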
The function guplot
is generic, and
guplot.default
and guplot.vlm
are some
methods functions for Gumbel plots.
A list is returned invisibly with the following components.
x |
The reduced data. |
y |
The sorted y data. |
The Gumbel distribution is a special case of the GEV distribution with shape parameter equal to zero.
T. W. Yee
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
Gumbel, E. J. (1958). Statistics of Extremes. New York, USA: Columbia University Press.
gumbel
,
gumbelff
,
gev
,
venice
.
## Not run: 
guplot(rnorm(500), las = 1) -> ii
names(ii)
guplot(with(venice, r1), col = "blue")  # Venice sea levels data
## End(Not run)
Looks at the formula
to
see if it has an intercept term.
has.intercept(object, ...)
has.interceptvlm(object, form.number = 1, ...)
object |
A fitted model object. |
form.number |
Formula number: either 1 or 2,
corresponding to the arguments formula and form2 respectively. |
... |
Arguments that might be passed from one function to another. |
This methods function is a simple way to determine whether a
fitted vglm
object etc. has an intercept term
or not.
It is not entirely foolproof because one might suppress the
intercept from the formula and then add in a variable in the
formula that has a constant value.
Returns a single logical.
Thomas W. Yee
formulavlm
,
termsvlm
.
# Example: this is based on a glm() example
counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9); treatment <- gl(3, 3)
pdata <- data.frame(counts, outcome, treatment)  # Better style
vglm.D93 <- vglm(counts ~ outcome + treatment, poissonff, data = pdata)
formula(vglm.D93)
term.names(vglm.D93)
responseName(vglm.D93)
has.intercept(vglm.D93)
When complete, a suite of functions that can be used to compute some of the regression (leave-one-out deletion) diagnostics for the VGLM class.
hatvalues(model, ...)
hatvaluesvlm(model,
             type = c("diagonal", "matrix", "centralBlocks"), ...)
hatplot(model, ...)
hatplot.vlm(model, multiplier = c(2, 3), lty = "dashed",
            xlab = "Observation", ylab = "Hat values", ylim = NULL, ...)
dfbetavlm(model, maxit.new = 1, trace.new = FALSE,
          smallno = 1.0e-8, ...)
model |
an R object, typically returned by vglm. |
type |
Character.
The default is the first choice, which is
an n x M matrix of the diagonal elements of the hat matrix. |
multiplier |
Numeric, the multiplier. The usual rule-of-thumb is that values greater than two or three times the average leverage (at least for the linear model) should be checked. |
lty , xlab , ylab , ylim
|
Graphical parameters, see
par. |
maxit.new , trace.new , smallno
|
Having maxit.new = 1 means that only one IRLS iteration is computed when the model is refitted with each observation deleted; this is fast but approximate. The other two arguments control the tracing and a small numerical tolerance for those refits. |
... |
further arguments,
for example, graphical parameters for hatplot.vlm(). |
The invocation hatvalues(vglmObject)
should return a
matrix of the diagonal elements of the
hat (projection) matrix of a
vglm
object.
To do this,
the QR decomposition of the object is retrieved or
reconstructed, and then straightforward calculations
are performed.
The invocation hatplot(vglmObject)
should plot
the diagonal of the hat matrix for each of the
linear/additive predictors.
By default, two horizontal dashed lines are added;
hat values higher than these ought to be checked.
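A small sketch applying the rule of thumb behind the multiplier argument (the pneumoconiosis model is reused from the Examples below; flagging each linear predictor's column separately is this illustration's choice, not a package convention):

# Minimal sketch: flag observations whose hat values exceed
# 2 x the column average, separately for each linear predictor.
library(VGAM)
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative, data = pneumo)
H <- hatvalues(fit)          # n x M matrix
cutoffs <- 2 * colMeans(H)   # Rule-of-thumb thresholds
which(sweep(H, 2, cutoffs, ">"), arr.ind = TRUE)  # Observations to check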
It is hoped, soon, that the full suite of functions described at
influence.measures
will be written for VGLMs.
This will enable general regression deletion diagnostics to be
available for the entire VGLM class.
T. W. Yee.
vglm
,
cumulative
,
influence.measures
.
# Proportional odds model, p.179, in McCullagh and Nelder (1989)
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative, data = pneumo)
hatvalues(fit)  # n x M matrix, with positive values
all.equal(sum(hatvalues(fit)), fit@rank)  # Should be TRUE

## Not run: 
par(mfrow = c(1, 2))
hatplot(fit, ylim = c(0, 1), las = 1, col = "blue")
## End(Not run)
A detection test for the Hauck-Donner effect on each regression coefficient of a VGLM regression or 2 x 2 table.
hdeff(object, ...)
hdeff.vglm(object, derivative = NULL, se.arg = FALSE,
           subset = NULL, theta0 = 0, hstep = 0.005,
           fd.only = FALSE, ...)
hdeff.numeric(object, byrow = FALSE, ...)
hdeff.matrix(object, ...)
object |
Usually a vglm object. Alternatively, object may be a numeric vector of 4 counts, representing a 2 x 2 table of counts. Another alternative is that object is a 2 x 2 matrix of counts. |
derivative |
Numeric. Either 1 or 2.
Currently only a few models having
one linear predictor are handled
analytically for derivative = 2, e.g., binomialff and poissonff. |
se.arg |
Logical. If TRUE then the derivatives of the standard errors are returned as well, as 1 or 2 additional columns; see the Value section below. |
subset |
Logical or vector of indices,
to select the regression coefficients of interest.
The default is to select all coefficients.
Recycled if necessary if logical.
If numeric then they should comprise
elements from 1:length(coef(object)). |
theta0 |
Numeric. Vector recycled to the necessary length which is
the number of regression coefficients.
The null hypotheses for the regression coefficients are that
they equal those respective values, and the alternative
hypotheses are all two-sided.
It is not recommended that argument theta0 be assigned anything other than its default value of 0. |
hstep |
Positive numeric and recycled to length 2;
it is the so-called step size when using
finite-differences and is often called h. |
fd.only |
Logical;
if TRUE then finite-difference approximations are used exclusively to compute the derivatives, even when an analytical solution has been implemented. It is possible that fd.only = TRUE is used internally when an analytical solution is unavailable. |
byrow |
Logical;
fed into matrix when object is a vector. |
... |
currently unused but may be used in the future for further arguments passed into the other methods functions. |
Almost all of statistical inference based on the likelihood assumes that the parameter estimates are located in the interior of the parameter space. The nonregular case of being located on the boundary is not considered very much and leads to very different results from the regular case. Practically, an important question is: how close is too close to the boundary? One might answer this as: the parameter estimates are too close to the boundary when the Hauck-Donner effect (HDE) is present, whereby the Wald statistic becomes aberrant.
Hauck and Donner (1977) first observed an aberration of the
Wald test statistic not monotonically increasing as a function
of increasing distance between the parameter estimate and the
null value. This "disturbing" and "undesirable" underappreciated
effect has since been observed in other regression models by
various authors. This function computes the first, and possibly
second, derivative of the Wald statistic for each regression
coefficient. A negative value of the first derivative is
indicative of the HDE being present. More information can be
obtained from hdeffsev
regarding HDE severity:
there may be none, faint, weak, moderate, strong and extreme
amounts of HDE present.
In general, most models have derivatives that are computed
numerically using finite-difference
approximations. The reason is that it takes a lot of work
to program in the analytical solution
(this includes a few very common models, such as
poissonff
and
binomialff
,
where the first two derivatives have been implemented).
By default this function returns a labelled logical vector;
a TRUE
means the HDE is affirmative for that coefficient
(negative slope).
Hence ideally all values are FALSE
.
Any TRUE
values suggests that the MLE is
too near the boundary of the parameter space,
and that the p-value for that regression coefficient
is biased upwards.
When present,
a highly significant variable might be deemed nonsignificant,
and thus the HDE can create havoc for variable selection.
If the HDE is present then more accurate
p-values can generally be obtained by conducting a
likelihood ratio test
(see lrt.stat.vlm
)
or Rao's score test
(see score.stat.vlm
);
indeed the default of
wald.stat.vlm
does not suffer from the HDE.
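A minimal sketch of that workflow, reusing the pneumoconiosis model from the Examples below (placing the three calls side by side is this illustration's choice):

# Minimal sketch: detect the HDE, then obtain HDE-free statistics.
library(VGAM)
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let,
            cumulative(reverse = TRUE, parallel = TRUE), data = pneumo)
hdeff(fit)       # TRUE flags coefficients affected by the HDE
lrt.stat(fit)    # LRT-based statistics: an HDE-free alternative
score.stat(fit)  # Rao score statistics: also HDE-free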
Setting deriv = 1
returns a numerical vector of first
derivatives of the Wald statistics.
Setting deriv = 2
returns a 2-column matrix of first
and second derivatives of the Wald statistics.
Then setting se.arg = TRUE
returns an additional 1 or
2 columns.
Some 2nd derivatives are NA
if only a partial analytic
solution has been programmed in.
For those VGAM family functions whose HDE test has not
yet been implemented explicitly (the vast majority of them),
finite-difference approximations to the derivatives will
be used—see the arguments hstep
and fd.only
for getting some control on them.
The function summaryvglm
conducts the HDE
detection test if possible and prints out a line at the bottom
if the HDE is detected for some regression coefficients.
By “if possible” it is meant that only a few family functions
are exempt, because they
have an infos slot with component hadof = FALSE;
examples are
normal.vcm
and
rec.normal,
because they
use the BFGS-IRLS method for computing the working weights.
For these few a NULL
is returned by hdeff
.
If the second derivatives are of interest then
it is recommended that crit = "c"
be added to the
fitting so that a slightly more accurate model results
(usually one more IRLS iteration).
This is because the FD approximation is very sensitive to
values of the working weights, so they need to be computed
accurately.
Occasionally, if the coefficient is close to 0,
then its Wald statistic's
second derivative may be unusually large in magnitude
(this could be due to something such as roundoff error).
This function is currently under development
and may change a little in the short future.
For HDE severity measures see hdeffsev
.
Thomas W. Yee.
Hauck, J. W. W. and A. Donner (1977). Wald's test as applied to hypotheses in logit analysis. Journal of the American Statistical Association, 72, 851–853.
Yee, T. W. (2022). On the Hauck-Donner effect in Wald tests: Detection, tipping points and parameter space characterization, Journal of the American Statistical Association, 117, 1763–1774. doi:10.1080/01621459.2021.1886936.
Yee, T. W. (2021). Some new results concerning the Hauck-Donner effect. Manuscript in preparation.
summaryvglm
,
hdeffsev
,
alogitlink
,
asinlink
,
vglm
,
lrt.stat
,
score.stat
,
wald.stat
,
confintvglm
,
profilevglm
.
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, data = pneumo,
            trace = TRUE, crit = "c",  # Get some more accuracy
            cumulative(reverse = TRUE, parallel = TRUE))
cumulative()@infos()$hadof  # Analytical solution implemented
hdeff(fit)
hdeff(fit, deriv = 1)  # Analytical solution
hdeff(fit, deriv = 2)  # It is a partial analytical solution
hdeff(fit, deriv = 2, se.arg = TRUE,
      fd.only = TRUE)  # All derivatives solved numerically by FDs

# 2 x 2 table of counts
R0 <- 25; N0 <- 100  # Hauck Donner (1977) data set
mymat <- c(N0-R0, R0, 8, 92)  # HDE present
(mymat <- matrix(mymat, 2, 2, byrow = TRUE))
hdeff(mymat)
hdeff(c(mymat))  # Input is a vector
hdeff(c(t(mymat)), byrow = TRUE)  # Reordering of the data
Computes the severity of the Hauck-Donner effect for each regression coefficient of a VGLM regression.
hdeffsev(x, y, dy, ddy, allofit = FALSE, eta0 = 0, COPS0 = eta0,
         severity.table = c("None", "Faint", "Weak", "Moderate",
                            "Strong", "Extreme", "Undetermined"))
hdeffsev2(x, y, dy, ddy, allofit = FALSE, ndepends = FALSE, eta0 = 0,
          severity.table = c("None", "Faint", "Weak", "Moderate",
                             "Strong", "Extreme",
                             "Undetermined")[if (ndepends) TRUE else
                             c(1, 4, 6, 7)],
          tol0 = 0.1)
x , y
|
Numeric vectors;
x contains the estimates and y the corresponding Wald statistics. |
dy , ddy
|
Numeric vectors;
the first and second derivatives of the Wald statistics.
They can be computed by hdeff. |
allofit |
Logical. If TRUE then a list with more detailed output is returned. |
severity.table |
Character vector with 6 values plus the last value for initialization. Usually users should not assign anything to this argument. |
eta0 |
Numeric. The hypothesized value.
The default is appropriate for most symmetric
links. |
ndepends |
Logical. Use boundaries that depend on the
sample size n? |
COPS0 |
Numeric. See Yee (2023). |
tol0 |
Numeric. Any estimate whose absolute value is less than
tol0 is regarded as being numerically (close to) zero. |
Note: The function
hdeffsev
has a bug or two in it but
they should be fixed later this year (2024).
Function hdeffsev
is currently rough-and-ready.
It is possible to use the first two derivatives obtained
from hdeff
to categorize the severity of
the Hauck-Donner effect (HDE).
It is effectively assumed that, starting at
the origin
and going right,
the curve is made up of a convex segment followed by
a concave segment and then another convex segment.
Midway in the concave segment the first
derivative is 0, and
beyond that the HDE really manifests itself because the
derivative remains negative.
For "None"
the estimate lies on the convex
part of the curve near the origin, hence there is
very little HDE at all.
For "Weak"
the estimate lies on the
concave part of the curve but the Wald statistic is still
increasing as estimate gets away from 0, hence it is only
a mild form of the HDE.
For "Moderate"
,
"Strong"
and "Extreme"
the Wald statistic is
decreasing as the estimate moves away from eta0
,
hence it
really does exhibit the HDE.
It is recommended that lrt.stat
be used
to compute
LRT p-values, as they do not suffer from the HDE.
By default this function
(hdeffsev
)
returns a labelled vector with
elements selected from
severity.table
.
If allofit = TRUE
then Yee (2022) gives details
about some of the other list components,
e.g., a quantity called
zeta
is the normal line projected onto the x-axis,
and its first derivative gives additional
information about the position
of the estimate along the curve.
These functions are likely to change in the near future because they are experimental and far from complete. Improvements are intended.
The severity measures ideally should be based on
tangent lines rather than normal lines so that the
boundaries are independent of the sample size
\(n\). Hence such boundaries differ a little
from Yee (2022), which had a mixture of such.
The functions were written specifically for
binomialff
, but they should work
for some other family functions.
Currently,
in order for "Strong"
to be assigned correctly,
at least one such value is needed on each of the
LHS and RHS. From those, two other boundary
points are obtained, creating two intervals.
Thomas W. Yee.
Yee, T. W. (2022). On the Hauck-Donner effect in Wald tests: Detection, tipping points and parameter space characterization, Journal of the American Statistical Association, 117, 1763–1774. doi:10.1080/01621459.2021.1886936.
Yee, T. W. (2023). Some new results concerning the Wald tests and the parameter space. In review.
deg <- 4  # myfun is a function that approximates the HDE
myfun <- function(x, deriv = 0)
  switch(as.character(deriv),
         '0' = x^deg * exp(-x),
         '1' = (deg * x^(deg-1) - x^deg) * exp(-x),
         '2' = (deg * (deg-1) * x^(deg-2) - 2 * deg * x^(deg-1) +
                x^deg) * exp(-x))
xgrid <- seq(0, 10, length = 101)
ansm <- hdeffsev(xgrid, myfun(xgrid), myfun(xgrid, deriv = 1),
                 myfun(xgrid, deriv = 2), allofit = TRUE)
digg <- 4
cbind(severity = ansm$sev,
      fun     = round(myfun(xgrid), digg),
      deriv1  = round(myfun(xgrid, deriv = 1), digg),
      deriv2  = round(myfun(xgrid, deriv = 2), digg),
      zderiv1 = round(1 + (myfun(xgrid, deriv = 1))^2 +
                      myfun(xgrid, deriv = 2) * myfun(xgrid), digg))
A hormone assay data set from Carroll and Ruppert (1988).
data(hormone)
A data frame with 85 observations on the following 2 variables.
X
a numeric vector, suitable as the x-axis in a scatter plot. The reference method.
Y
a numeric vector, suitable as the y-axis in a scatter plot. The test method.
The data is given in Table 2.4 of
Carroll and Ruppert (1988), and was downloaded
from http://www.stat.tamu.edu/~carroll
prior to 2019.
The book describes the data as follows.
The data are the results of two assay methods for hormone
data; the scale of the data as presented is not
particularly meaningful, and the original source
of the data refused permission to divulge further
information. As in a similar example of
Leurgans (1980), the old or reference method is
being used to predict the new or test method.
The overall goal is to see whether we can reproduce
the test-method measurements with the reference-method
measurements.
Thus calibration might be of interest for the data.
Carroll, R. J. and Ruppert, D. (1988). Transformation and Weighting in Regression. New York, USA: Chapman & Hall.
Leurgans, S. (1980). Evaluating laboratory measurement techniques. Biostatistics Casebook. Eds.: Miller, R. G. Jr., and Efron, B. and Brown, B. W. Jr., and Moses, L. New York, USA: Wiley.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
## Not run: 
data(hormone)
summary(hormone)

modelI <- rrvglm(Y ~ 1 + X, data = hormone, trace = TRUE,
                 uninormal(zero = NULL, lsd = "identitylink",
                           imethod = 2))

# Alternative way to fit modelI
modelI.other <- vglm(Y ~ 1 + X, data = hormone, trace = TRUE,
                     uninormal(zero = NULL, lsd = "identitylink"))

# Inferior to modelI
modelII <- vglm(Y ~ 1 + X, data = hormone, trace = TRUE,
                family = uninormal(zero = NULL))

logLik(modelI)
logLik(modelII)  # Less than logLik(modelI)

# Reproduce the top 3 equations on p.65 of Carroll and Ruppert (1988).
# They are called Equations (1)--(3) here.

# Equation (1)
hormone <- transform(hormone, rX = 1 / X)
clist <- list("(Intercept)" = diag(2), X = diag(2), rX = rbind(0, 1))
fit1 <- vglm(Y ~ 1 + X + rX, family = uninormal(zero = NULL),
             constraints = clist, data = hormone, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)  # Actually, the intercepts do not seem significant
plot(Y ~ X, hormone, col = "blue")
lines(fitted(fit1) ~ X, hormone, col = "orange")

# Equation (2)
fit2 <- rrvglm(Y ~ 1 + X, uninormal(zero = NULL), hormone, trace = TRUE)
coef(fit2, matrix = TRUE)
plot(Y ~ X, hormone, col = "blue")
lines(fitted(fit2) ~ X, hormone, col = "red")
# Add +- 2 SEs
lines(fitted(fit2) + 2 * exp(predict(fit2)[, "loglink(sd)"]) ~ X,
      hormone, col = "orange")
lines(fitted(fit2) - 2 * exp(predict(fit2)[, "loglink(sd)"]) ~ X,
      hormone, col = "orange")

# Equation (3)
# Does not fit well because the loglink link for the mean is not good.
fit3 <- rrvglm(Y ~ 1 + X, maxit = 300, data = hormone, trace = TRUE,
               uninormal(lmean = "loglink", zero = NULL))
coef(fit3, matrix = TRUE)
plot(Y ~ X, hormone, col = "blue")  # Does not look okay.
lines(exp(predict(fit3)[, 1]) ~ X, hormone, col = "red")
# Add +- 2 SEs
lines(fitted(fit3) + 2 * exp(predict(fit3)[, "loglink(sd)"]) ~ X,
      hormone, col = "orange")
lines(fitted(fit3) - 2 * exp(predict(fit3)[, "loglink(sd)"]) ~ X,
      hormone, col = "orange")
## End(Not run)
Abundance of hunting spiders in a Dutch dune area.
data(hspider)
A data frame with 28 observations (sites) on the following 18 variables.
WaterCon: Log percentage of soil dry mass.
BareSand: Log percentage cover of bare sand.
FallTwig: Log percentage cover of fallen leaves and twigs.
CoveMoss: Log percentage cover of the moss layer.
CoveHerb: Log percentage cover of the herb layer.
ReflLux: Reflection of the soil surface with cloudless sky.
Alopacce: Abundance of Alopecosa accentuata.
Alopcune: Abundance of Alopecosa cuneata.
Alopfabr: Abundance of Alopecosa fabrilis.
Arctlute: Abundance of Arctosa lutetiana.
Arctperi: Abundance of Arctosa perita.
Auloalbi: Abundance of Aulonia albimana.
Pardlugu: Abundance of Pardosa lugubris.
Pardmont: Abundance of Pardosa monticola.
Pardnigr: Abundance of Pardosa nigriceps.
Pardpull: Abundance of Pardosa pullata.
Trocterr: Abundance of Trochosa terricola.
Zoraspin: Abundance of Zora spinimana.
The data, which originally came from Van der Aart and Smeek-Enserink (1975) consists of abundances (numbers trapped over a 60 week period) and 6 environmental variables. There were 28 sites.
This data set has been often used to illustrate
ordination, e.g., using
canonical correspondence analysis (CCA).
In the example below, the
data is used for constrained quadratic ordination
(CQO; formerly called
canonical Gaussian ordination or CGO),
a numerically intensive method
that has many superior qualities.
See cqo
for details.
Van der Aart, P. J. M. and Smeek-Enserink, N. (1975). Correlations between distributions of hunting spiders (Lycosidae, Ctenidae) and environmental characteristics in a dune area. Netherlands Journal of Zoology, 25, 1–45.
summary(hspider)

## Not run: 
# Standardize the environmental variables:
hspider[, 1:6] <- scale(subset(hspider, select = WaterCon:ReflLux))

# Fit a rank-1 binomial CAO
hsbin <- hspider  # Binary species data
hsbin[, -(1:6)] <- as.numeric(hsbin[, -(1:6)] > 0)
set.seed(123)
ahsb1 <- cao(cbind(Alopcune, Arctlute, Auloalbi, Zoraspin) ~
             WaterCon + ReflLux,
             family = binomialff(multiple.responses = TRUE),
             df1.nl = 2.2, Bestof = 3, data = hsbin)
par(mfrow = 2:1, las = 1)
lvplot(ahsb1, type = "predictors", llwd = 2,
       ylab = "logitlink(p)", lcol = 1:9)
persp(ahsb1, rug = TRUE, col = 1:10, lwd = 2)
coef(ahsb1)
## End(Not run)
M-estimation of the two parameters of Huber's least favourable distribution. The one parameter case is also implemented.
huber1(llocation = "identitylink", k = 0.862, imethod = 1)
huber2(llocation = "identitylink", lscale = "loglink",
       k = 0.862, imethod = 1, zero = "scale")
llocation , lscale
|
Link functions applied to the location and scale parameters.
See Links for more choices. |
k |
Tuning constant.
See rhuber for more information. |
imethod , zero
|
See CommonVGAMffArguments for information. |
Huber's least favourable distribution family function is popular for resistant/robust regression. The center of the distribution is normal and its tails are double exponential.
By default, the mean is the first linear/additive predictor (returned as the fitted values; this is the location parameter), and the log of the scale parameter is the second linear/additive predictor. The Fisher information matrix is diagonal; Fisher scoring is implemented.
The VGAM family function huber1()
estimates only the
location parameter. It assumes a scale parameter of unit value.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Warning: actually, huber2()
may be erroneous since the
first derivative is not continuous when there are two parameters
to estimate. huber1()
is fine in this respect.
The response should be univariate.
T. W. Yee. Help was given by Arash Ardalan.
Huber, P. J. and Ronchetti, E. (2009). Robust Statistics, 2nd ed. New York: Wiley.
rhuber
,
uninormal
,
laplace
,
CommonVGAMffArguments
.
set.seed(1231); NN <- 30; coef1 <- 1; coef2 <- 10
hdata <- data.frame(x2 = sort(runif(NN)))
hdata <- transform(hdata, y = rhuber(NN, mu = coef1 + coef2 * x2))

hdata$x2[1] <- 0.0  # Add an outlier
hdata$y[1] <- 10

fit.huber2 <- vglm(y ~ x2, huber2(imethod = 3), hdata, trace = TRUE)
fit.huber1 <- vglm(y ~ x2, huber1(imethod = 3), hdata, trace = TRUE)

coef(fit.huber2, matrix = TRUE)
summary(fit.huber2)

## Not run: 
# Plot the results
plot(y ~ x2, data = hdata, col = "blue", las = 1)
lines(fitted(fit.huber2) ~ x2, data = hdata, col = "darkgreen", lwd = 2)

fit.lm <- lm(y ~ x2, hdata)  # Compare to a LM:
lines(fitted(fit.lm) ~ x2, data = hdata, col = "lavender", lwd = 3)

# Compare to truth:
lines(coef1 + coef2 * x2 ~ x2, data = hdata, col = "orange",
      lwd = 2, lty = "dashed")

legend("bottomright", legend = c("truth", "huber", "lm"),
       col = c("orange", "darkgreen", "lavender"),
       lty = c("dashed", "solid", "solid"), lwd = c(2, 2, 3))
## End(Not run)
Simulated capture data set for the linear logistic model depending on an occasion covariate and an individual covariate for 10 trapping occasions and 20 individuals.
data(Huggins89table1)
data(Huggins89.t1)
The format is a data frame.
Table 1 of Huggins (1989) gives this toy data set.
Note that variables t1
,...,t10
are
occasion-specific variables. They correspond to the
response variables y1
,...,y10
which
have values 1 for capture and 0 for not captured.
Both Huggins89table1
and Huggins89.t1
are identical.
The latter used variables beginning with z
,
not t
, and may be withdrawn very soon.
Huggins, R. M. (1989). On the statistical analysis of capture experiments. Biometrika, 76, 133–140.
## Not run: 
Huggins89table1 <-
  transform(Huggins89table1, x3.tij = t01,
            T02 = t02, T03 = t03, T04 = t04, T05 = t05, T06 = t06,
            T07 = t07, T08 = t08, T09 = t09, T10 = t10)
small.table1 <- subset(Huggins89table1,
  y01 + y02 + y03 + y04 + y05 + y06 + y07 + y08 + y09 + y10 > 0)

# fit.tbh is the bottom equation on p.133.
# It is an M_tbh model.
fit.tbh <-
  vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~
       x2 + x3.tij,
       xij = list(x3.tij ~ t01 + t02 + t03 + t04 + t05 + t06 +
                           t07 + t08 + t09 + t10 +
                           T02 + T03 + T04 + T05 + T06 +
                           T07 + T08 + T09 + T10 - 1),
       posbernoulli.tb(parallel.t = TRUE ~ x2 + x3.tij),
       data = small.table1, trace = TRUE,
       form2 = ~ x2 + x3.tij + t01 + t02 + t03 + t04 + t05 + t06 +
                 t07 + t08 + t09 + t10 +
                 T02 + T03 + T04 + T05 + T06 + T07 + T08 + T09 + T10)

# These results differ a bit from Huggins (1989), probably because
# two animals had to be removed here (they were never caught):
coef(fit.tbh)  # First element is the behavioural effect
sqrt(diag(vcov(fit.tbh)))  # SEs
constraints(fit.tbh, matrix = TRUE)
summary(fit.tbh, presid = FALSE)
fit.tbh@extra$N.hat     # Estimate of the population size N; cf. 20.86
fit.tbh@extra$SE.N.hat  # Its standard error; cf. 1.87 or 4.51

fit.th <-
  vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2,
       posbernoulli.t, data = small.table1, trace = TRUE)
coef(fit.th)
constraints(fit.th)
coef(fit.th, matrix = TRUE)  # M_th model
summary(fit.th, presid = FALSE)
fit.th@extra$N.hat     # Estimate of the population size N
fit.th@extra$SE.N.hat  # Its standard error

fit.bh <-
  vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2,
       posbernoulli.b(I2 = FALSE), data = small.table1, trace = TRUE)
coef(fit.bh)
constraints(fit.bh)
coef(fit.bh, matrix = TRUE)  # M_bh model
summary(fit.bh, presid = FALSE)
fit.bh@extra$N.hat
fit.bh@extra$SE.N.hat

fit.h <-
  vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2,
       posbernoulli.b, data = small.table1, trace = TRUE)
coef(fit.h, matrix = TRUE)  # M_h model (version 1)
coef(fit.h)
summary(fit.h, presid = FALSE)
fit.h@extra$N.hat
fit.h@extra$SE.N.hat

Fit.h <-
  vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~ x2,
       posbernoulli.t(parallel.t = TRUE ~ x2),
       data = small.table1, trace = TRUE)
coef(Fit.h)
coef(Fit.h, matrix = TRUE)  # M_h model (version 2)
summary(Fit.h, presid = FALSE)
Fit.h@extra$N.hat
Fit.h@extra$SE.N.hat
## End(Not run)
The hunua
data frame has 392 rows and 18 columns.
Altitude is explanatory, and there are binary responses
(presence/absence = 1/0 respectively) for 17 plant species.
data(hunua)
This data frame contains the following columns:
agaaus: Agathis australis, or Kauri
beitaw: Beilschmiedia tawa, or Tawa
corlae: Corynocarpus laevigatus
cyadea: Cyathea dealbata
cyamed: Cyathea medullaris
daccup: Dacrydium cupressinum
dacdac: Dacrycarpus dacrydioides
eladen: Elaeocarpus dentatus
hedarb: Hedycarya arborea
hohpop: Species name unknown
kniexc: Knightia excelsa, or Rewarewa
kuneri: Kunzea ericoides
lepsco: Leptospermum scoparium
metrob: Metrosideros robusta
neslan: Nestegis lanceolata
rhosap: Rhopalostylis sapida
vitluc: Vitex lucens, or Puriri
altitude: meters above sea level
These were collected from the Hunua Ranges, a small forest in southern
Auckland, New Zealand. At 392 sites in the forest, the presence/absence
of 17 plant species was recorded, as well as the altitude.
Each site was of area size 200 square meters.
Dr Neil Mitchell, University of Auckland.
# Fit a GAM using vgam() and compare it with the Waitakere Ranges one
fit.h <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
## Not run: 
plot(fit.h, se = TRUE, lcol = "orange", scol = "orange",
     llwd = 2, slwd = 2, main = "Orange is Hunua, Blue is Waitakere")
## End(Not run)
head(predict(fit.h, hunua, type = "response"))

fit.w <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = waitakere)
## Not run: 
plot(fit.w, se = TRUE, lcol = "blue", scol = "blue", add = TRUE)
## End(Not run)
head(predict(fit.w, hunua, type = "response"))  # Same as above?
Estimating the parameter of the Husler-Reiss angular surface distribution by maximum likelihood estimation.
hurea(lshape = "loglink", zero = NULL, nrfs = 1,
      gshape = exp(3 * ppoints(5) - 1), parallel = FALSE)
lshape , gshape
|
Details at CommonVGAMffArguments. |
nrfs , zero , parallel
|
Details at CommonVGAMffArguments. |
The Husler-Reiss angular surface distribution has a
probability density function that can be written in closed
form (see dhurea)
for \(0 < y < 1\) and positive shape parameter \(s\).
The mean of \(Y\)
is currently unknown to me,
as well as its quantiles.
Hence 0.5 is currently returned as the
fitted values.
Fisher-scoring is implemented.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This VGAM family function handles multiple responses.
It may struggle and/or fail
when the shape parameter is close to 0.
Some comments about “u”-shaped versus unimodal
densities accommodated by this distribution
are at
dhurea
.
T. W. Yee
Mhalla, L. and de Carvalho, M. and Chavez-Demoulin, V. (2019). Regression-type models for extremal dependence. Scandinavian Journal of Statistics, 46, 1141–1167.
nn <- 100; set.seed(1)
hdata <- data.frame(x2 = runif(nn))
hdata <- transform(hdata,  # Cannot generate proper random variates!
  y1 = rbeta(nn, shape1 = 0.5, shape2 = 0.5),  # "U" shaped
  y2 = rnorm(nn, 0.65, sd = exp(-3 - 4 * x2)))
# Multiple responses:
hfit <- vglm(cbind(y1, y2) ~ x2, hurea, hdata, trace = TRUE)
coef(hfit, matrix = TRUE)
summary(hfit)
Density for the Husler-Reiss angular surface distribution.
dhurea(x, shape, log = FALSE)
x |
Same as in runif. |
shape |
the positive (shape) parameter.
It is often called \(s\) below. |
log |
Logical.
If log = TRUE, the logarithm of the density is returned. |
See hurea
, the VGAM
family function for
estimating the (shape) parameter by
maximum likelihood
estimation, for the formula of the
probability density function.
dhurea
gives the density.
The cases x == 0
, x == 1
,
shape == 0
and
shape == Inf
may not be handled correctly.
Difficulties are encountered as
the shape parameter approaches 0 with
respect to integrate
because the density converges to a degenerate
distribution with probability mass at 0 and 1.
That is, when \(s\) is around 0.5 the
density is “u” shaped and the area around the
endpoints becomes concentrated at the
two points.
See the examples below.
Approximately, the
density is “u” shaped for \(s < 1\)
and unimodal for \(s > 1\).
T. W. Yee
integrate(dhurea, 0, 1, shape = 0.20)  # Incorrect
integrate(dhurea, 0, 1, shape = 0.35)  # Struggling but okay
## Not run: 
x <- seq(0, 1, length = 501)
par(mfrow = c(2, 2))
plot(x, dhurea(x, 0.7), col = "blue", type = "l")
plot(x, dhurea(x, 1.1), col = "blue", type = "l")
plot(x, dhurea(x, 1.4), col = "blue", type = "l")
plot(x, dhurea(x, 3.0), col = "blue", type = "l")
## End(Not run)
Family function for a hypergeometric distribution where either the number of white balls or the total number of white and black balls are unknown.
hyperg(N = NULL, D = NULL, lprob = "logitlink", iprob = NULL)
N |
Total number of white and black balls in the urn.
Must be a vector with positive values, and is recycled, if necessary,
to the same length as the response.
One of N and D must be inputted. |
D |
Number of white balls in the urn.
Must be a vector with positive values, and is recycled, if necessary,
to the same length as the response.
One of N and D must be inputted. |
lprob |
Link function for the probabilities.
See Links for more choices. |
iprob |
Optional initial value for the probabilities. The default is to choose initial values internally. |
Consider the scenario from
dhyper
where there
are \(N = m + n\) balls in an urn, where \(m\) are white and \(n\)
are black. A simple random sample (i.e., without replacement) of
\(k\) balls is taken.
The response here is the sample proportion of white balls.
In this document,
N
is \(m + n\),
D
is \(m\) (for the number of “defectives”, in quality
control terminology, or equivalently, the number of marked individuals).
The parameter to be estimated is the population proportion of
white balls, viz.
\(prob = m / (m + n)\).
Depending on which one of N
and D
is inputted, the
estimate of the other parameter can be obtained from the equation
\(D = N \times prob\), or equivalently,
prob = D/N. However,
the log-factorials are computed using lgamma
and both \(N\) and \(D\)
are not restricted to being integer.
Thus if an integer
\(N\) is to be estimated, it will be necessary to
evaluate the likelihood function at integer values about the estimate,
i.e., at
trunc(Nhat)
and ceiling(Nhat)
where Nhat
is the (real) estimate of \(N\).
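One possible reading of that integer-evaluation advice, as a hand-rolled sketch (the log-likelihood is written out directly with dhyper() rather than calling the package, and the value of Nhat is a made-up stand-in for a real-valued estimate):

# Minimal sketch: evaluate the hypergeometric log-likelihood at the
# two integers bracketing a real-valued estimate Nhat, with D fixed.
set.seed(1)
m <- 5; n <- 4; k <- rep(4, 100)
y <- rhyper(nn = 100, m = m, n = n, k = k)
D <- m                 # Number of white balls, assumed known
ll <- function(N)      # Log-likelihood as a function of integer N
  sum(dhyper(y, m = D, n = N - D, k = k, log = TRUE))
Nhat <- 9.4            # Say, a real-valued estimate of N (hypothetical)
c(ll(trunc(Nhat)), ll(ceiling(Nhat)))  # Prefer the larger one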
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as
vglm
,
vgam
,
rrvglm
,
cqo
,
and cao
.
No checking is done to ensure that certain values are within range,
e.g., k <= N.
The response can be of one of three formats: a factor (first
level taken as success), a vector of proportions of success,
or a 2-column matrix (first column = successes) of counts.
The argument weights
in the modelling function can also be
specified. In particular, for a general vector of proportions,
you will need to specify weights
because the number of
trials is needed.
Thomas W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
nn <- 100
m <- 5  # Number of white balls in the population
k <- rep(4, len = nn)  # Sample sizes
n <- 4  # Number of black balls in the population
y <- rhyper(nn = nn, m = m, n = n, k = k)
yprop <- y / k  # Sample proportions

# N is unknown, D is known. Both models are equivalent:
fit <- vglm(cbind(y, k-y) ~ 1, hyperg(D = m), trace = TRUE, crit = "c")
fit <- vglm(yprop ~ 1, hyperg(D = m), weight = k, trace = TRUE, crit = "c")

# N is known, D is unknown. Both models are equivalent:
fit <- vglm(cbind(y, k-y) ~ 1, hyperg(N = m+n), trace = TRUE, crit = "l")
fit <- vglm(yprop ~ 1, hyperg(N = m+n), weight = k, trace = TRUE, crit = "l")

coef(fit, matrix = TRUE)
Coef(fit)  # Should be equal to the true population proportion
unique(m / (m+n))  # The true population proportion
fit@extra
head(fitted(fit))
summary(fit)
Estimation of the parameter of the hyperbolic secant distribution.
hypersecant(link.theta = extlogitlink(min = -pi/2, max = pi/2),
            init.theta = NULL)
hypersecant01(link.theta = extlogitlink(min = -pi/2, max = pi/2),
              init.theta = NULL)
link.theta |
Parameter link function applied to the
parameter \(\theta\). See Links for more choices. |
init.theta |
Optional initial value for \(\theta\). |
The probability density function of the hyperbolic secant distribution is given by
\(f(y; \theta) = \exp(\theta y + \log(\cos(\theta))) \, / \, (2 \cosh(\pi y / 2)),\)
for parameter \(-\pi/2 < \theta < \pi/2\)
and all real \(y\).
The mean of \(Y\)
is \(\tan(\theta)\)
(returned as the fitted values).
Morris (1982) calls this model NEF-HS
(Natural Exponential Family-Hyperbolic Secant).
It is used to generate NEFs, giving rise to the class of NEF-GHS
(G for Generalized).
Another parameterization is used for hypersecant01():
let \(Y = (\mbox{logit}\, U) / \pi\).
Then this uses
\(f(u; \theta) = (\cos(\theta)/\pi) \times u^{-0.5 + \theta/\pi} \times (1 - u)^{-0.5 - \theta/\pi},\)
for
parameter \(-\pi/2 < \theta < \pi/2\)
and \(0 < u < 1\).
Then the mean of \(U\)
is \(0.5 + \theta/\pi\)
(returned as the fitted values) and the variance is
\((\pi^2 - 4 \theta^2) / (8 \pi^2)\).
For both parameterizations Newton-Raphson is the same as Fisher scoring.
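A quick numerical check of the two densities above (a minimal sketch: the closed forms are written out as plain R functions exactly as stated, not calls into the package, and the value of theta is an arbitrary choice):

# Minimal sketch: both densities above should integrate to 1, and
# the first parameterization should have mean tan(theta).
theta <- 0.3
f <- function(y)
  exp(theta * y + log(cos(theta))) / (2 * cosh(pi * y / 2))
f01 <- function(u)
  (cos(theta) / pi) * u^(-0.5 + theta/pi) * (1 - u)^(-0.5 - theta/pi)
integrate(f, -Inf, Inf)$value                    # Should be 1
integrate(function(y) y * f(y), -Inf, Inf)$value # Should be tan(theta)
tan(theta)
integrate(f01, 0, 1)$value  # Should be 1 (endpoint singularities
                            # are mild and integrable)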
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
T. W. Yee
Jorgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall.
Morris, C. N. (1982). Natural exponential families with quadratic variance functions. The Annals of Statistics, 10(1), 65–80.
hdata <- data.frame(x2 = rnorm(nn <- 200))
hdata <- transform(hdata, y = rnorm(nn))  # Not very good data!
fit1 <- vglm(y ~ x2, hypersecant, hdata, trace = TRUE, crit = "c")
coef(fit1, matrix = TRUE)
fit1@misc$earg

# Not recommended:
fit2 <- vglm(y ~ x2, hypersecant(link = "identitylink"), hdata)
coef(fit2, matrix = TRUE)
fit2@misc$earg
Estimating the parameter of Haight's zeta distribution
hzeta(lshape = "logloglink", ishape = NULL, nsimEIM = 100)
lshape |
Parameter link function for the parameter,
called \(\alpha\) below. See Links for more choices. |
ishape , nsimEIM
|
See CommonVGAMffArguments for information. |
The probability function is

f(y) = (2y - 1)^(-alpha) - (2y + 1)^(-alpha),

where the parameter alpha > 0 and y = 1, 2, .... The function dhzeta computes this probability function. The mean of Y, which is returned as the fitted values, is (1 - 2^(-alpha)) * zeta(alpha) provided alpha > 1, where zeta() is Riemann's zeta function. The mean is a decreasing function of alpha. The mean is infinite if alpha <= 1, and the variance is infinite if alpha <= 2.
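As a quick numeric sanity check of this mean formula (a sketch; the shape value 2.5 is arbitrary, and zeta() here is VGAM's Riemann zeta function):

alpha <- 2.5
mean(rhzeta(n = 1e5, shape = alpha))  # Simulated mean
(1 - 2^(-alpha)) * zeta(alpha)  # Theoretical mean; should be close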
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
T. W. Yee
Johnson N. L., Kemp, A. W. and Kotz S. (2005). Univariate Discrete Distributions, 3rd edition, pp.533–4. Hoboken, New Jersey: Wiley.
Hzeta, zeta, zetaff, loglog, simulate.vlm.
shape <- exp(exp(-0.1))  # The parameter
hdata <- data.frame(y = rhzeta(n = 1000, shape))
fit <- vglm(y ~ 1, hzeta, data = hdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)  # Useful for intercept-only models; should be same as shape
c(with(hdata, mean(y)), head(fitted(fit), 1))
summary(fit)
Density, distribution function, quantile function and random generation for Haight's zeta distribution with parameter shape.
dhzeta(x, shape, log = FALSE)
phzeta(q, shape, log.p = FALSE)
qhzeta(p, shape)
rhzeta(n, shape)
x , q , p , n |
Same meaning as runif. |
shape |
The positive shape parameter. Called alpha above. |
log , log.p
|
The probability function is

f(y) = (2y - 1)^(-shape) - (2y + 1)^(-shape),

where shape > 0 and y = 1, 2, ....
dhzeta gives the density, phzeta gives the distribution function, qhzeta gives the quantile function, and rhzeta generates random deviates.

Given some response data, the VGAM family function hzeta estimates the parameter shape.
T. W. Yee and Kai Huang
hzeta, zeta, zetaff, simulate.vlm.
dhzeta(1:20, 2.1)
rhzeta(20, 2.1)
round(1000 * dhzeta(1:8, 2))
table(rhzeta(1000, 2))
## Not run: 
shape <- 1.1; x <- 1:10
plot(x, dhzeta(x, shape = shape), type = "h", ylim = 0:1,
     sub = paste("shape =", shape), las = 1, col = "blue",
     ylab = "Probability", lwd = 2,
     main = "Haight's zeta: blue = density; orange = CDF")
lines(x + 0.1, phzeta(x, shape = shape), col = "orange",
      lty = 3, lwd = 2, type = "h")
## End(Not run)
Maps the elements of an array containing symmetric positive-definite matrices to a matrix with sufficient columns to hold them (called the matrix-band format).
iam(j, k, M, both = FALSE, diag = TRUE)
j |
Usually an integer from the set {1:M}, giving the row number of an element. |
k |
An integer from the set {1:M}, giving the column number of an element. |
M |
The number of linear/additive predictors. This is the dimension of each positive-definite symmetric matrix. |
both |
Logical. Return both the row and column indices? See below for more details. |
diag |
Logical. Return the indices for the diagonal elements? If FALSE then only the strictly upper triangular part of the matrix elements are used. |
Suppose we have n symmetric positive-definite square matrices, each M by M, and these are stored in an array of dimension c(n,M,M). Then these can be more compactly represented by a matrix of dimension c(n,K) where K is an integer between M and M*(M+1)/2 inclusive. The mapping between these two representations is given by this function. It firstly enumerates by the diagonal elements, followed by the band immediately above the diagonal, then the band above that one, etc. The last element is (1,M).
This function performs the mapping from elements (j,k)
of symmetric positive-definite square matrices to the columns
of another matrix representing such. This is called the
matrix-band format and is used by the VGAM package.
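For example, with M = 3 the enumeration order just described can be verified directly (a small sketch; the column numbers follow from the description above):

M <- 3
iam(1, 1, M = M)  # Column 1: the first diagonal element
iam(2, 2, M = M)  # Column 2
iam(3, 3, M = M)  # Column 3: the last diagonal element
iam(1, 2, M = M)  # Column 4: first band above the diagonal
iam(2, 3, M = M)  # Column 5
iam(1, 3, M = M)  # Column 6: element (1, M) comes last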
This function has a dual purpose depending on the value of
both
. If both = FALSE
then the column number
corresponding to the j
-k
element of the matrix is
returned. If both = TRUE
then j
and k
are
ignored and a list with the following components is returned.
row.index |
The row indices of the upper triangular part of the matrix (this may or may not include the diagonal elements, depending on the argument diag). |
col.index |
The column indices of the upper triangular part of the matrix (this may or may not include the diagonal elements, depending on the argument diag). |
This function is used in the weight
slot of many
VGAM family functions
(see vglmff-class
),
especially those whose M is determined by the data,
e.g.,
dirichlet
, multinomial
.
T. W. Yee
iam(1, 2, M = 3)  # The 4th coln represents elt (1,2) of a 3x3 matrix
iam(NULL, NULL, M = 3, both = TRUE)  # Return the row & column indices
dirichlet()@weight

M <- 4
temp1 <- iam(NA, NA, M = M, both = TRUE)
mat1 <- matrix(NA, M, M)
mat1[cbind(temp1$row, temp1$col)] <- 1:length(temp1$row)
mat1  # More commonly used

temp2 <- iam(NA, NA, M = M, both = TRUE, diag = FALSE)
mat2 <- matrix(NA, M, M)
mat2[cbind(temp2$row, temp2$col)] <- 1:length(temp2$row)
mat2  # Rarely used
Computes the identity transformation, including its inverse and the first two derivatives.
identitylink(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
negidentitylink(theta, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
inverse , deriv , short , tag |
Details at Links. |
The identity link function g(theta) = theta should be available to every parameter estimated by the VGAM library. However, it usually results in numerical problems because the estimates lie outside the permitted range. Consequently, the result may contain Inf, -Inf, NA or NaN.

The function negidentitylink is the negative-identity link function and corresponds to g(theta) = -theta. This is useful for some models, e.g., in the literature supporting the gevff function it seems that half of the authors use xi = -k for the shape parameter and the other half use k instead of xi.
For identitylink(): for deriv = 0, the identity of theta, i.e., theta when inverse = FALSE, and if inverse = TRUE then theta. For deriv = 1, the function returns d eta / d theta as a function of theta if inverse = FALSE, else if inverse = TRUE then it returns the reciprocal.

For negidentitylink(): the results are similar to identitylink() except for a sign change in most cases.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links, loglink, logitlink, probitlink, powerlink.
identitylink((-5):5)
identitylink((-5):5, deriv = 1)
identitylink((-5):5, deriv = 2)
negidentitylink((-5):5)
negidentitylink((-5):5, deriv = 1)
negidentitylink((-5):5, deriv = 2)
Returns a matrix containing the influence function of a fitted model, e.g., a "vglm" object.
Influence(object, ...)
Influence.vglm(object, weighted = TRUE, ...)
object |
an object, especially that of class "vglm". |
weighted |
Logical. Include the prior weights? Currently only TRUE is accepted. |
... |
any additional arguments such as to allow or disallow the prior weights. |
Influence functions are useful in fields such as sample survey theory, e.g., survey, svyVGAM. For each i = 1, ..., n, the formula is approximately I^(-1) * U_i, where I is the weighted Fisher information matrix and U_i is the ith score vector.
An n
by p.vlm
matrix.
This function is currently experimental and
defaults may change.
Use with caution!
The functions here should not be confused with lm.influence.
vglm, vglm-class, survey.
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo)
coef(fit)  # 8-vector
Influence(fit)  # 8 x 4
all(abs(colSums(Influence(fit))) < 1e-6)  # TRUE
Estimates the two parameters of an inverse binomial distribution by maximum likelihood estimation.
inv.binomial(lrho = extlogitlink(min = 0.5, max = 1), llambda = "loglink", irho = NULL, ilambda = NULL, zero = NULL)
lrho , llambda |
Link function for the rho and lambda parameters. |
irho , ilambda |
Numeric. Optional initial values for rho and lambda. |
zero |
See CommonVGAMffArguments for information. |
The inverse binomial distribution of Yanagimoto (1989) has density function

f(y; rho, lambda) = lambda * Gamma(2y + lambda) * [rho * (1 - rho)]^y * rho^lambda / [Gamma(y + 1) * Gamma(y + lambda + 1)],

where y = 0, 1, 2, ..., 0.5 < rho < 1 and lambda > 0. The first two moments exist for rho > 0.5; then the mean is lambda * (1 - rho) / (2 * rho - 1) (returned as the fitted values) and the variance is lambda * rho * (1 - rho) / (2 * rho - 1)^3. The inverse binomial distribution is a special case of the generalized negative binomial distribution of Jain and Consul (1971). It holds that Var(Y)/E(Y) > 1, so that the inverse binomial distribution is overdispersed compared with the Poisson distribution.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This VGAM family function only works reasonably well with
intercept-only models.
Good initial values are needed; if convergence failure occurs
use irho
and/or ilambda
.
Some elements of the working weight matrices use the expected
information matrix while other elements use the observed
information matrix.
Yet to do: using the mean and the reciprocal of lambda results in an EIM that is diagonal.
T. W. Yee
Yanagimoto, T. (1989). The inverse binomial distribution as a statistical model. Communications in Statistics: Theory and Methods, 18, 3625–3633.
Jain, G. C. and Consul, P. C. (1971). A generalized negative binomial distribution. SIAM Journal on Applied Mathematics, 21, 501–513.
Jorgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall
idata <- data.frame(y = rnbinom(n <- 1000, mu = exp(3), size = exp(1)))
fit <- vglm(y ~ 1, inv.binomial, data = idata, trace = TRUE)
with(idata, c(mean(y), head(fitted(fit), 1)))
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)
sum(weights(fit))  # Sum of the prior weights
sum(weights(fit, type = "work"))  # Sum of the working weights
Density, distribution function and random generation for the inverse Gaussian distribution.
dinv.gaussian(x, mu, lambda, log = FALSE)
pinv.gaussian(q, mu, lambda)
rinv.gaussian(n, mu, lambda)
x , q
|
vector of quantiles. |
n |
number of observations. If length(n) > 1 then the length is taken to be the number required. |
mu |
the mean parameter. |
lambda |
the lambda parameter, which is positive. |
log |
Logical. If log = TRUE then the logarithm of the density is returned. |
See inv.gaussianff, the VGAM family function for estimating both parameters by maximum likelihood estimation, for the formula of the probability density function.

dinv.gaussian gives the density, pinv.gaussian gives the distribution function, and rinv.gaussian generates random deviates. Currently qinv.gaussian is unavailable.
T. W. Yee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1, New York: Wiley.
Taraldsen, G. and Lindqvist, B. H. (2005). The multiple roots simulation algorithm, the inverse Gaussian distribution, and the sufficient conditional Monte Carlo method. Preprint Statistics No. 4/2005, Norwegian University of Science and Technology, Trondheim, Norway.
## Not run: 
x <- seq(-0.05, 4, len = 300)
plot(x, dinv.gaussian(x, mu = 1, lambda = 1), type = "l",
     col = "blue", las = 1,
     main = "blue is density, orange is cumulative distribution function")
abline(h = 0, col = "gray", lty = 2)
lines(x, pinv.gaussian(x, mu = 1, lambda = 1), type = "l", col = "orange")
## End(Not run)
Estimates the two parameters of the inverse Gaussian distribution by maximum likelihood estimation.
inv.gaussianff(lmu = "loglink", llambda = "loglink", imethod = 1, ilambda = NULL, parallel = FALSE, ishrinkage = 0.99, zero = NULL)
lmu , llambda |
Parameter link functions for the mu and lambda parameters. |
ilambda , parallel |
See CommonVGAMffArguments for information. |
imethod , ishrinkage , zero |
See CommonVGAMffArguments for information. |
The standard ("canonical") form of the inverse Gaussian distribution has a density that can be written as

f(y; mu, lambda) = sqrt(lambda / (2 * pi * y^3)) * exp(-lambda * (y - mu)^2 / (2 * mu^2 * y)),

where y > 0, mu > 0 and lambda > 0. The mean of Y is mu and its variance is mu^3 / lambda. By default, eta1 = log(mu) and eta2 = log(lambda). The mean is returned as the fitted values.
This VGAM family function can handle multiple
responses (inputted as a matrix).
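A quick simulation sketch of the stated mean and variance (the parameter values mu = 2 and lambda = 3 are arbitrary):

mu <- 2; lambda <- 3
y <- rinv.gaussian(1e5, mu = mu, lambda = lambda)
c(mean(y), mu)  # Sample mean; should be near mu
c(var(y), mu^3 / lambda)  # Sample variance; should be near mu^3 / lambda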
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm, rrvglm and vgam.
The inverse Gaussian distribution can be fitted (to a certain extent) using the usual GLM framework involving a scale parameter. This family function is different from that approach in that it estimates both parameters by full maximum likelihood estimation.
T. W. Yee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1, New York: Wiley.
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
The R package SuppDists has several functions for evaluating the density, distribution function, quantile function and generating random numbers from the inverse Gaussian distribution.
idata <- data.frame(x2 = runif(nn <- 1000))
idata <- transform(idata, mymu = exp(2 + 1 * x2),
                          Lambda = exp(2 + 1 * x2))
idata <- transform(idata, y = rinv.gaussian(nn, mu = mymu, Lambda))
fit1 <- vglm(y ~ x2, inv.gaussianff, data = idata, trace = TRUE)
rrig <- rrvglm(y ~ x2, inv.gaussianff, data = idata, trace = TRUE)
coef(fit1, matrix = TRUE)
coef(rrig, matrix = TRUE)
Coef(rrig)
summary(fit1)
Maximum likelihood estimation of the 2-parameter inverse Lomax distribution.
inv.lomax(lscale = "loglink", lshape2.p = "loglink", iscale = NULL, ishape2.p = NULL, imethod = 1, gscale = exp(-5:5), gshape2.p = exp(-5:5), probs.y = c(0.25, 0.5, 0.75), zero = "shape2.p")
lscale , lshape2.p |
Parameter link functions applied to the (positive) parameters b and p. |
iscale , ishape2.p , imethod , zero |
See CommonVGAMffArguments for information. |
gscale , gshape2.p |
See CommonVGAMffArguments for information. |
probs.y |
See CommonVGAMffArguments for information. |
The 2-parameter inverse Lomax distribution is the 4-parameter generalized beta II distribution with shape parameters a = q = 1. It is also the 3-parameter Dagum distribution with shape parameter a = 1, as well as the beta distribution of the second kind with q = 1. More details can be found in Kleiber and Kotz (2003).

The inverse Lomax distribution has density

f(y) = p * (y/b)^(p-1) / (b * [1 + y/b]^(p+1)),

for y >= 0, b > 0, p > 0. Here, b is the scale parameter scale, and p is a shape parameter. The mean does not seem to exist; the median is returned as the fitted values.
This family function handles multiple responses.
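Since the median is used as the fitted values, it can be cross-checked with qinv.lomax; a small sketch with arbitrary parameter values:

Scale <- exp(2); p <- exp(1)
qinv.lomax(0.5, scale = Scale, shape2.p = p)  # Theoretical median
median(rinv.lomax(1e5, scale = Scale, shape2.p = p))  # Should be close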
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
See the notes in genbetaII.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
inv.lomax, genbetaII, betaII, dagum, sinmad, fisk, lomax, paralogistic, inv.paralogistic, simulate.vlm.
idata <- data.frame(y = rinv.lomax(2000, sc = exp(2), exp(1)))
fit <- vglm(y ~ 1, inv.lomax, data = idata, trace = TRUE)
fit <- vglm(y ~ 1, inv.lomax(iscale = exp(3)), data = idata,
            trace = TRUE, epsilon = 1e-8, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, distribution function, quantile function and random generation for the inverse Lomax distribution with shape parameter p and scale parameter scale.
dinv.lomax(x, scale = 1, shape2.p, log = FALSE)
pinv.lomax(q, scale = 1, shape2.p, lower.tail = TRUE, log.p = FALSE)
qinv.lomax(p, scale = 1, shape2.p, lower.tail = TRUE, log.p = FALSE)
rinv.lomax(n, scale = 1, shape2.p)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. If length(n) > 1 then the length is taken to be the number required. |
shape2.p |
shape parameter. |
scale |
scale parameter. |
log |
Logical. If log = TRUE then the logarithm of the density is returned. |
lower.tail , log.p |
Same meaning as in pnorm or qnorm. |
See inv.lomax, which is the VGAM family function for estimating the parameters by maximum likelihood estimation.

dinv.lomax gives the density, pinv.lomax gives the distribution function, qinv.lomax gives the quantile function, and rinv.lomax generates random deviates.

The inverse Lomax distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
idata <- data.frame(y = rinv.lomax(n = 1000, exp(2), exp(1)))
fit <- vglm(y ~ 1, inv.lomax, idata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
Maximum likelihood estimation of the 2-parameter inverse paralogistic distribution.
inv.paralogistic(lscale = "loglink", lshape1.a = "loglink", iscale = NULL, ishape1.a = NULL, imethod = 1, lss = TRUE, gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25), probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lss |
See CommonVGAMffArguments for important information. |
lshape1.a , lscale |
Parameter link functions applied to the (positive) parameters a and scale. |
iscale , ishape1.a , imethod , zero |
See CommonVGAMffArguments for information. |
gscale , gshape1.a |
See CommonVGAMffArguments for information. |
probs.y |
See CommonVGAMffArguments for information. |
The 2-parameter inverse paralogistic distribution is the 4-parameter generalized beta II distribution with shape parameters p = a and q = 1. It is the 3-parameter Dagum distribution with p = a. More details can be found in Kleiber and Kotz (2003).

The inverse paralogistic distribution has density

f(y) = a^2 * (y/b)^(a^2 - 1) / (b * [1 + (y/b)^a]^(a + 1)),

for y >= 0, b > 0, a > 0. Here, b is the scale parameter scale, and a is the shape parameter. The mean is

E(Y) = b * gamma(a + 1/a) * gamma(1 - 1/a) / gamma(a),

provided a > 1; these are returned as the fitted values.
This family function handles multiple responses.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
See the notes in genbetaII.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Inv.paralogistic, genbetaII, betaII, dagum, sinmad, fisk, inv.lomax, lomax, paralogistic, simulate.vlm.
## Not run: 
idata <- data.frame(y = rinv.paralogistic(3000, exp(1), sc = exp(2)))
fit <- vglm(y ~ 1, inv.paralogistic(lss = FALSE), idata, trace = TRUE)
fit <- vglm(y ~ 1, inv.paralogistic(imethod = 2, ishape1.a = 4),
            data = idata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Density, distribution function, quantile function and random generation for the inverse paralogistic distribution with shape parameters a and p, and scale parameter scale.
dinv.paralogistic(x, scale = 1, shape1.a, log = FALSE)
pinv.paralogistic(q, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE)
qinv.paralogistic(p, scale = 1, shape1.a, lower.tail = TRUE, log.p = FALSE)
rinv.paralogistic(n, scale = 1, shape1.a)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. If length(n) > 1 then the length is taken to be the number required. |
shape1.a |
shape parameter. |
scale |
scale parameter. |
log |
Logical. If log = TRUE then the logarithm of the density is returned. |
lower.tail , log.p |
Same meaning as in pnorm or qnorm. |
See inv.paralogistic, which is the VGAM family function for estimating the parameters by maximum likelihood estimation.

dinv.paralogistic gives the density, pinv.paralogistic gives the distribution function, qinv.paralogistic gives the quantile function, and rinv.paralogistic generates random deviates.

The inverse paralogistic distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
idata <- data.frame(y = rinv.paralogistic(3000, exp(1), sc = exp(2)))
fit <- vglm(y ~ 1, inv.paralogistic(lss = FALSE, ishape1.a = 2.1),
            data = idata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
Checks to see if a fitted object suffers from some known bug.
is.buggy(object, ...)
is.buggy.vlm(object, each.term = FALSE, ...)
object |
A fitted VGAM object, e.g., from vgam. |
each.term |
Logical. If TRUE then a logical is returned for each term. |
... |
Unused for now. |
It is known that vgam with s terms does not correctly handle constraint matrices (cmat, say) when crossprod(cmat) is not diagonal. This function detects whether this is so or not.
Note that probably all VGAM family functions have defaults
where all crossprod(cmat)
s are diagonal, therefore do
not suffer from this bug. It is more likely to occur if the
user inputs constraint matrices using the constraints
argument (and setting zero = NULL
if necessary).
Second-generation VGAMs based on sm.ps are a modern alternative to using s. They do not suffer from this bug. However, G2-VGAMs require a reasonably large sample size in order to work more reliably.
The default is a single logical (TRUE
if any term is
TRUE
),
otherwise a vector of such with each element corresponding to
a term. If the value is TRUE
then I suggest replacing
the VGAM by a similar model fitted by vglm
and
using regression splines, e.g., bs
,
ns
.
When the bug is fixed this function may be withdrawn; otherwise it will always return FALSEs!
T. W. Yee
fit1 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(3, 4)),
             binomialff(multiple.responses = TRUE), data = hunua)
is.buggy(fit1)  # Okay
is.buggy(fit1, each.term = TRUE)  # No terms are buggy

fit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(3, 4)),
             binomialff(multiple.responses = TRUE), data = hunua,
             constraints = list("(Intercept)" = diag(2),
                                "s(altitude, df = c(3, 4))" =
                                  matrix(c(1, 1, 0, 1), 2, 2)))
is.buggy(fit2)  # TRUE
is.buggy(fit2, each.term = TRUE)
constraints(fit2)

# fit2b is an approximate alternative to fit2:
fit2b <- vglm(cbind(agaaus, kniexc) ~ bs(altitude, df=3) + bs(altitude, df=4),
              binomialff(multiple.responses = TRUE), data = hunua,
              constraints = list("(Intercept)" = diag(2),
                                 "bs(altitude, df = 3)" = rbind(1, 1),
                                 "bs(altitude, df = 4)" = rbind(0, 1)))
is.buggy(fit2b)  # Okay
is.buggy(fit2b, each.term = TRUE)
constraints(fit2b)
Returns a logical from testing whether an object such as an extlogF1() VGLM object has crossing quantiles.
is.crossing.vglm(object, ...)
object |
an object such as a "vglm" object. |
... |
additional optional arguments. Currently unused. |
This function was specifically written for
a vglm
with family function extlogF1
.
It examines the fitted quantiles to see if any cross.
Note that if one uses regression splines such as
bs
and
ns
then it is possible that they cross at values of the
covariate space that are not represented by actual data.
One could use linear interpolation between fitted values
to get around this problem.
A logical.
If TRUE
then one can try to fit a similar model by
combining columns of the constraint matrices so that
crossing no longer holds; see fix.crossing
.
For LMS-Box-Cox type quantile regression models
it is impossible for the quantiles to cross, by definition,
hence FALSE
is returned;
see lms.bcn
.
extlogF1, fix.crossing, lms.bcn, vglm.
## Not run: 
ooo <- with(bmi.nz, order(age))
bmi.nz <- bmi.nz[ooo, ]  # Sort by age
with(bmi.nz, plot(age, BMI, col = "blue"))
mytau <- c(50, 93, 95, 97) / 100  # Some quantiles are quite close
fit1 <- vglm(BMI ~ ns(age, 7), extlogF1(mytau), bmi.nz, trace = TRUE)
plot(BMI ~ age, bmi.nz, col = "blue", las = 1,
     main = "Partially parallel (darkgreen) & nonparallel quantiles",
     sub = "Crossing quantiles are orange")
is.crossing(fit1)
matlines(with(bmi.nz, age), fitted(fit1), lty = 1, col = "orange")
## End(Not run)
Returns a logical vector from a test of whether an object such as a matrix or VGLM object corresponds to a parallelism assumption.
is.parallel.matrix(object, ...)
is.parallel.vglm(object, type = c("term", "lm"), ...)
object |
an object such as a constraint matrix or a "vglm" object. |
type |
passed into constraints. |
... |
additional optional arguments. Currently unused. |
These functions may be useful for categorical models such as propodds, cumulative, acat, cratio, sratio, multinomial.
A vector of logicals, testing whether each constraint matrix is a one-column matrix of ones. Note that parallelism can still be thought of as holding if a constraint matrix has non-zero but constant values; however, this is currently not implemented. No checking is done that the constraint matrices have the same number of rows.
## Not run: 
require("VGAMdata")
fit <- vglm(educ ~ sm.bs(age) * sex + ethnicity,
            cumulative(parallel = TRUE), head(xs.nz, 200))
is.parallel(fit)
is.parallel(fit, type = "lm")  # For each column of the LM matrix
## End(Not run)
Tests an object to see if it is smart.
is.smart(object)
object |
a function or a fitted model. |
If object is a function then this function looks to see whether object has the logical attribute "smart". If so then this is returned, else FALSE.

If object is a fitted model then this function looks to see whether object@smart.prediction or object$smart.prediction exists. If it does and it is not equal to list(smart.arg = FALSE) then a TRUE is returned, else FALSE. The reason for this is that, e.g., lm(..., smart = FALSE) and vglm(..., smart = FALSE) will return such a specific list.
Writers of smart functions manually have to assign this attribute to their smart function after it has been written.
Returns TRUE
or FALSE
, according to whether the object
is smart or not.
is.smart(sm.min1)  # TRUE
is.smart(sm.poly)  # TRUE
library(splines)
is.smart(sm.bs)  # TRUE
is.smart(sm.ns)  # TRUE
is.smart(tan)  # FALSE
## Not run: 
udata <- data.frame(x2 = rnorm(9))
fit1 <- vglm(rnorm(9) ~ x2, uninormal, data = udata)
is.smart(fit1)  # TRUE
fit2 <- vglm(rnorm(9) ~ x2, uninormal, data = udata, smart = FALSE)
is.smart(fit2)  # FALSE
fit2@smart.prediction
## End(Not run)
Returns a logical vector from a test of whether an object such as a matrix or VGLM object corresponds to a 'zero' assumption.
is.zero.matrix(object, ...)
is.zero.vglm(object, ...)
object |
an object such as a coefficient matrix of a vglm object, or a vglm object. |
... |
additional optional arguments. Currently unused. |
These functions test the effect of the zero
argument
on a vglm
object or the coefficient matrix
of a vglm
object. The latter is obtained by
coef(vglmObject, matrix = TRUE)
.
A vector of logicals,
testing whether each linear/additive predictor
has the zero
argument applied to it.
It is TRUE
if that linear/additive predictor is
intercept-only, i.e., all other regression coefficients
are set to zero.
No checking is done for the intercept term at all, i.e., that it was estimated in the first place.
constraints, vglm, CommonVGAMffArguments.
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age,
            binom2.or(zero = NULL), data = coalminers)
is.zero(fit)
is.zero(coef(fit, matrix = TRUE))
Computes Kendall's Tau, which is a rank-based correlation measure, between two vectors.
kendall.tau(x, y, exact = FALSE, max.n = 3000)
x , y |
Numeric vectors. Must be of equal length. Ideally their values are continuous and not too discrete. Let N denote the common length of x and y. |
exact |
Logical. If TRUE then the exact value is computed. |
max.n |
Numeric. If exact = FALSE and N is more than max.n then a random sample of max.n pairs is chosen instead. |
Kendall's tau is a measure of dependency in a bivariate distribution. Loosely, two random variables are concordant if large values of one random variable are associated with large values of the other random variable. Similarly, two random variables are disconcordant if large values of one random variable are associated with small values of the other random variable. More formally, if (x[i] - x[j])*(y[i] - y[j]) > 0 then that comparison is concordant, and if (x[i] - x[j])*(y[i] - y[j]) < 0 then that comparison is disconcordant. Out of choose(N, 2) comparisons, let c and d be the number of concordant and disconcordant pairs. Then Kendall's tau can be estimated by (c - d) / (c + d). If there are ties then half the ties are deemed concordant and half disconcordant, so that (c - d) / (c + d + t) is used, where t is the number of tied comparisons.
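As a cross-check, for continuous data without ties the estimate should agree with base R's cor(x, y, method = "kendall"); a small sketch:

set.seed(1)
x <- rnorm(50)
y <- x + rnorm(50)
kendall.tau(x, y, exact = TRUE)  # All choose(50, 2) comparisons
cor(x, y, method = "kendall")  # Base R; should agree since no ties here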
Kendall's tau, which lies between -1 and 1.

If length(x) is large then the cost is O(N^2), which is expensive! Under these circumstances it is not advisable to set exact = TRUE or max.n to a very large number.
N <- 5000; x <- 1:N; y <- runif(N)
true.rho <- -0.8
ymat <- rbinorm(N, cov12 = true.rho)  # Bivariate normal, aka N_2
x <- ymat[, 1]
y <- ymat[, 2]
## Not run: plot(x, y, col = "blue")

kendall.tau(x, y)  # A random sample is taken here
kendall.tau(x, y)  # A random sample is taken here
kendall.tau(x, y, exact = TRUE)  # Costly if length(x) is large
kendall.tau(x, y, max.n = N)  # Same as exact = TRUE
(rhohat <- sin(kendall.tau(x, y) * pi / 2))  # Holds for N_2 actually
true.rho  # rhohat should be near this value
Calculates the Kullback-Leibler divergence for certain fitted model objects
KLD(object, ...)
KLDvglm(object, ...)
object |
Some VGAM object, for example, having class "vglm". |
... |
Other possible arguments fed into KLDvglm. |
The Kullback-Leibler divergence (KLD),
or relative entropy,
is a measure of how one probability distribution differs
from a second reference probability distribution.
Currently the VGAM package computes the KLD
for GAITD regression models
(e.g., see gaitdpoisson
and
gaitdnbinomial
) where the reference distribution
is the (unscaled) parent or base distribution.
For such, the formula for the KLD simplifies somewhat.
Hence one can obtain a quantitative measure for the overall
effect of altering, inflating, truncating and deflating certain
(special) values.
Returns a numeric nonnegative value with the corresponding KLD. A value of 0 means no difference from an ordinary parent or base distribution.
Numerical problems might occur if any of the evaluated probabilities of the unscaled parent distribution are very close to 0.
T. W. Yee.
Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22, 79–86.
M'Kendrick, A. G. (1925). Applications of mathematics to medical problems. Proc. Edinb. Math. Soc., 44, 98–130.
# McKendrick (1925): Data from 223 Indian village households
cholera <- data.frame(ncases = 0:4,  # Number of cholera cases,
                      wfreq = c(168, 32, 16, 6, 1))  # Frequencies
fit7 <- vglm(ncases ~ 1, gaitdpoisson(i.mlm = 0, ilambda.p = 1),
             weight = wfreq, data = cholera, trace = TRUE)
coef(fit7, matrix = TRUE)
KLD(fit7)
Estimates the two parameters of the Kumaraswamy distribution by maximum likelihood estimation.
kumar(lshape1 = "loglink", lshape2 = "loglink", ishape1 = NULL, ishape2 = NULL, gshape1 = exp(2*ppoints(5) - 1), tol12 = 1.0e-4, zero = NULL)
lshape1 , lshape2 |
Link function for the two positive shape parameters, respectively, called a and b below. |
ishape1 , ishape2 |
Numeric. Optional initial values for the two positive shape parameters. |
tol12 |
Numeric and positive. Tolerance for testing whether the second shape parameter is either 1 or 2. If so then the working weights need to handle these singularities. |
gshape1 |
Values for a grid search for the first shape parameter. See CommonVGAMffArguments for information. |
zero |
See CommonVGAMffArguments for information. |
The Kumaraswamy distribution has density function

f(y; a, b) = a * b * y^(a-1) * (1 - y^a)^(b-1),

where 0 < y < 1 and the two shape parameters, a and b, are positive. The mean is E(Y) = b * Beta(1 + 1/a, b) (returned as the fitted values) and the variance is b * Beta(1 + 2/a, b) - [b * Beta(1 + 1/a, b)]^2.
Applications of the Kumaraswamy distribution include
the storage volume of a water reservoir.
Fisher scoring is implemented.
Handles multiple responses (matrix input).
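A quick simulation sketch of the mean formula above (the shape values 2 and 3 are arbitrary; beta() is base R's beta function):

a <- 2; b <- 3  # shape1 and shape2
b * beta(1 + 1/a, b)  # Mean implied by the formula above
mean(rkumar(1e5, shape1 = a, shape2 = b))  # Should be close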
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
T. W. Yee
Kumaraswamy, P. (1980). A generalized probability density function for double-bounded random processes. Journal of Hydrology, 46, 79–88.
Jones, M. C. (2009). Kumaraswamy's distribution: A beta-type distribution with some tractability advantages. Statistical Methodology, 6, 70–81.
shape1 <- exp(1); shape2 <- exp(2)
kdata <- data.frame(y = rkumar(n = 1000, shape1, shape2))
fit <- vglm(y ~ 1, kumar, data = kdata, trace = TRUE)
c(with(kdata, mean(y)), head(fitted(fit), 1))
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, distribution function, quantile function and random generation for the Kumaraswamy distribution.
dkumar(x, shape1, shape2, log = FALSE)
pkumar(q, shape1, shape2, lower.tail = TRUE, log.p = FALSE)
qkumar(p, shape1, shape2, lower.tail = TRUE, log.p = FALSE)
rkumar(n, shape1, shape2)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. If length(n) > 1 then the length is taken to be the number required. |
shape1 , shape2 |
positive shape parameters. |
log |
Logical. If log = TRUE then the logarithm of the density is returned. |
lower.tail , log.p |
Same meaning as in pnorm or qnorm. |
See kumar, the VGAM family function for estimating the parameters, for the formula of the probability density function and other details.

dkumar gives the density, pkumar gives the distribution function, qkumar gives the quantile function, and rkumar generates random deviates.
T. W. Yee and Kai Huang
## Not run: 
shape1 <- 2; shape2 <- 2; nn <- 201  # shape1 <- shape2 <- 0.5
x <- seq(-0.05, 1.05, len = nn)
plot(x, dkumar(x, shape1, shape2), type = "l", las = 1,
     ylab = paste("dkumar(shape1 = ", shape1,
                  ", shape2 = ", shape2, ")"),
     col = "blue", cex.main = 0.8, ylim = c(0, 1.5),
     main = "Blue is density, orange is the CDF",
     sub = "Red lines are the 10,20,...,90 percentiles")
lines(x, pkumar(x, shape1, shape2), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qkumar(probs, shape1, shape2)
lines(Q, dkumar(Q, shape1, shape2), col = "red", lty = 3, type = "h")
lines(Q, pkumar(Q, shape1, shape2), col = "red", lty = 3, type = "h")
abline(h = probs, col = "red", lty = 3)
max(abs(pkumar(Q, shape1, shape2) - probs))  # Should be 0
## End(Not run)
Rainbow and brown trout catches by a Mr Swainson at Lake Otamangakau in the central North Island of New Zealand during the 1970s and 1980s.
data(lakeO)
A data frame with 15 observations on the following 5 variables.
year
a numeric vector, the season began on 1 October of the year and ended 12 months later.
total.fish
a numeric vector, the total number of fish caught during the season. Simply the sum of brown and rainbow trout.
brown
a numeric vector, the number of brown trout (Salmo trutta) caught.
rainbow
a numeric vector, the number of rainbow trout (Oncorhynchus mykiss) caught.
visits
a numeric vector, the number of visits during the season that the angler made to the lake. It is necessary to assume that the visits were of an equal time length in order to interpret the usual Poisson regressions.
The data was extracted from the season summaries at Lake Otamangakau by Anthony Swainson for the seasons 1974–75 to 1988–89. Mr Swainson was one of a small group of regular fly fishing anglers and kept a diary of his catches. Lake Otamangakau is a lake of area 1.8 square km and has a maximum depth of about 12m, and is located in the central North Island of New Zealand. It is trout-infested and known for its trophy-sized fish.
See also trapO
.
Table 7.2 of the reference below. Thanks to Dr Michel Dedual for a copy of the report and for help reading the final year's data. The report is available from TWY on request.
Dedual, M. and MacLean, G. and Rowe, D. and Cudby, E., The Trout Population and Fishery of Lake Otamangakau—Interim Report. National Institute of Water and Atmospheric Research, Hamilton, New Zealand. Consultancy Report Project No. ELE70207, (Dec 1996).
data(lakeO)
lakeO
summary(lakeO)
Computes the Lambert W function for real values.
lambertW(x, tolerance = 1e-10, maxit = 50)
x |
A vector of reals. |
tolerance |
Accuracy desired. |
maxit |
Maximum number of iterations of third-order Halley's method. |
The Lambert W function is the root of the equation W(z) * exp(W(z)) = z for complex z. If z is real and -exp(-1) < z < 0 then it has two possible real values, and currently only the upper branch (often called W0) is computed, so that a value that is >= -1 is returned.

This function returns the principal branch of the W function for real z. It returns W(z) >= -1, and NA for z < -exp(-1).
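A small sketch verifying the defining equation on a few arbitrary values:

x <- c(-0.25, 0.5, 1, exp(1))
W <- lambertW(x)
max(abs(W * exp(W) - x))  # Nearly 0: W solves W * exp(W) = x
lambertW(-1)  # Below -exp(-1), so NA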
If convergence does not occur then increase the value of
maxit
and/or tolerance
.
Yet to do: add an argument lbranch = TRUE to return the lower branch (often called W-1) for real -exp(-1) <= z < 0; this would give W(z) <= -1.
T. W. Yee
Corless, R. M., Gonnet, G. H., Hare, D. E. G., Jeffrey, D. J. and Knuth, D. E. (1996). On the Lambert W function. Advances in Computational Mathematics, 5(4), 329–359.
log, exp, bell. There is also a package called LambertW.
## Not run: 
curve(lambertW, -exp(-1), 3, xlim = c(-1, 3), ylim = c(-2, 1),
      las = 1, col = "orange", n = 1001)
abline(v = -exp(-1), h = -1, lwd = 2, lty = "dotted", col = "gray")
abline(h = 0, v = 0, lty = "dashed", col = "blue")
## End(Not run)
Maximum likelihood estimation of the 2-parameter classical Laplace distribution.
laplace(llocation = "identitylink", lscale = "loglink", ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale")
llocation , lscale |
Character. Parameter link functions for the location parameter a and the scale parameter b. |
ilocation , iscale |
Optional initial values. If given, it must be numeric and values are recycled to the appropriate length. The default is to choose the value internally. |
imethod |
Initialization method. Either the value 1 or 2. |
zero |
See CommonVGAMffArguments for information. |
The Laplace distribution is often known as the double-exponential distribution and, for modelling, has heavier tail than the normal distribution. The Laplace density function is

f(y) = (1 / (2b)) * exp(-|y - a| / b),

where -Inf < y < Inf, -Inf < a < Inf and b > 0. Its mean is a and its variance is 2b^2. This parameterization is called the classical Laplace distribution by Kotz et al. (2001), and the density is symmetric about a.
For y ~ 1 (where y is the response) the maximum likelihood estimate (MLE) for the location parameter is the sample median, and the MLE for b is mean(abs(y - location)) (replace location by its MLE if unknown).
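These closed-form MLEs can be computed directly; a small sketch with arbitrary parameter values:

set.seed(123)
y <- rlaplace(500, location = 2, scale = 1.5)
median(y)  # MLE of the location parameter a
mean(abs(y - median(y)))  # MLE of the scale parameter b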
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This family function has not been fully tested.
The MLE regularity conditions do not hold for this
distribution, therefore misleading inferences may result, e.g.,
in the summary
and vcov
of the object. Hence this
family function might be withdrawn from VGAM in the future.
This family function uses Fisher scoring. Convergence may be slow for non-intercept-only models; half-stepping is frequently required.
T. W. Yee
Kotz, S., Kozubowski, T. J. and Podgorski, K. (2001). The Laplace distribution and generalizations: a revisit with applications to communications, economics, engineering, and finance, Boston: Birkhauser.
rlaplace, alaplace2 (which differs slightly from this parameterization), exponential, median.
ldata <- data.frame(y = rlaplace(nn <- 100, 2, scale = exp(1)))
fit <- vglm(y ~ 1, laplace, ldata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
with(ldata, median(y))

ldata <- data.frame(x = runif(nn <- 1001))
ldata <- transform(ldata, y = rlaplace(nn, 2, scale = exp(-1 + 1*x)))
coef(vglm(y ~ x, laplace(iloc = 0.2, imethod = 2, zero = 1),
          ldata, trace = TRUE), matrix = TRUE)
Density, distribution function, quantile function and random generation for the Laplace distribution with location parameter location and scale parameter scale.
dlaplace(x, location = 0, scale = 1, log = FALSE)
plaplace(q, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
qlaplace(p, location = 0, scale = 1, lower.tail = TRUE, log.p = FALSE)
rlaplace(n, location = 0, scale = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. Same as in runif. |
location |
the location parameter a. |
scale |
the scale parameter b. |
log |
Logical. If log = TRUE then the logarithm of the density is returned. |
lower.tail , log.p |
Same meaning as in pnorm or qnorm. |
The Laplace distribution is often known as the double-exponential distribution and, for modelling, has heavier tail than the normal distribution. The Laplace density function is

f(y) = (1 / (2b)) * exp(-|y - a| / b),

where -Inf < y < Inf, -Inf < a < Inf and b > 0. The mean is a and the variance is 2b^2.

See laplace, the VGAM family function for estimating the two parameters by maximum likelihood estimation, for formulae and details. Apart from n, all the above arguments may be vectors and are recycled to the appropriate length if necessary.
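A one-line sketch checking dlaplace against the closed-form density above (the values of y, a and b are arbitrary):

y <- seq(-3, 3, by = 1.5); a <- 1; b <- 2
max(abs(dlaplace(y, a, b) - exp(-abs(y - a) / b) / (2 * b)))  # Nearly 0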
dlaplace gives the density, plaplace gives the distribution function, qlaplace gives the quantile function, and rlaplace generates random deviates.
T. W. Yee and Kai Huang
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
loc <- 1; b <- 2
y <- rlaplace(n = 100, loc = loc, scale = b)
mean(y)  # Sample mean
loc  # Population mean
var(y)  # Sample variance
2 * b^2  # Population variance

## Not run: 
loc <- 0; b <- 1.5
x <- seq(-5, 5, by = 0.01)
plot(x, dlaplace(x, loc, b), type = "l", col = "blue",
     main = "Blue is density, orange is the CDF", ylim = c(0, 1),
     sub = "Purple are 5,10,...,95 percentiles", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b),
      dlaplace(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), loc, b),
      col = "purple", lty = 3, type = "h")
lines(x, plaplace(x, loc, b), type = "l", col = "orange")
abline(h = 0, lty = 2)
## End(Not run)

plaplace(qlaplace(seq(0.05, 0.95, by = 0.05), loc, b), loc, b)
Generic function for the latent variables of a model.
latvar(object, ...)
lv(object, ...)
object |
An object for which the extraction of latent variables is meaningful. |
... |
Other arguments fed into the specific methods function of the model. Sometimes they are fed into the methods function for Coef. |
Latent variables occur in reduced-rank regression models, as well as in quadratic and additive ordination models. For the latter two, latent variable values are often called site scores by ecologists. Latent variables are linear combinations of the explanatory variables.
The value returned depends specifically on the methods function invoked.
latvar
and lv
are identical,
but the latter will be deprecated soon.
Latent variables are not really applicable to
vglm
/vgam
models.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
latvar.qrrvglm, latvar.rrvglm, latvar.cao, lvplot.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardized environmental vars
set.seed(123)
p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
            WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          family = poissonff, data = hspider, Rank = 1,
          df1.nl = c(Zoraspin = 2.5, 3), Bestof = 3, Crow1positive = TRUE)

var(latvar(p1))  # Scaled to unit variance
c(latvar(p1))  # Estimated site scores
## End(Not run)
Estimates the two parameters of a (transformed) Leipnik distribution by maximum likelihood estimation.
leipnik(lmu = "logitlink", llambda = logofflink(offset = 1), imu = NULL, ilambda = NULL)
lmu , llambda |
Link function for the mu and lambda parameters. |
imu , ilambda |
Numeric. Optional initial values for mu and lambda. |
The (transformed) Leipnik distribution has density function

f(y; mu, lambda) = [y(1-y)]^(-1/2) * (1 + (y - mu)^2 / (y(1-y)))^(-lambda/2) / Beta((lambda + 1)/2, 1/2),

where 0 < y < 1 and lambda > -1. The mean is mu (returned as the fitted values).

Jorgensen (1997) calls the above the transformed Leipnik distribution, and if y = (x+1)/2 and mu = (theta+1)/2, then the distribution of X as a function of x and theta is known as the (untransformed) Leipnik distribution. Here, both x and theta are in (-1, 1).
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm, rrvglm and vgam.
Convergence may be slow or fail. Until better initial value estimates are forthcoming try assigning the argument ilambda some numerical value if it fails to converge. Currently, Newton-Raphson is implemented, not Fisher scoring. Currently, this family function probably only really works for intercept-only models, i.e., y ~ 1 in the formula.
T. W. Yee
Jorgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions, 2nd edition, Volume 2, New York: Wiley. (pages 612–617).
ldata <- data.frame(y = rnorm(2000, 0.5, 0.1))  # Improper data
fit <- vglm(y ~ 1, leipnik(ilambda = 1), ldata, trace = TRUE)
head(fitted(fit))
with(ldata, mean(y))
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)
sum(weights(fit))  # Sum of the prior weights
sum(weights(fit, type = "work"))  # Sum of the working weights
Computes the Lerch Phi function.
lerch(x, s, v, tolerance = 1.0e-10, iter = 100)
x , s , v |
Numeric. This function recycles values of x, s, and v if necessary. |
tolerance |
Numeric. Accuracy required, must be positive and less than 0.01. |
iter |
Maximum number of iterations allowed to obtain convergence. If iter is exceeded then the answer is NA. |
Also known as the Lerch transcendent, it can be defined by an integral involving analytical continuation. An alternative definition is the series

Phi(x, s, v) = sum_{n=0}^{infinity} x^n / (n + v)^s,

which converges for |x| < 1, as well as for |x| = 1 with s > 1. The series is undefined for integers v <= 0. Actually, x may be complex but this function only works for real x. The algorithm used is based on the relation

Phi(x, s, v) = 2^(-s) * [Phi(x^2, s, v/2) + x * Phi(x^2, s, (v+1)/2)].

See the URL below for more information. This function is a wrapper function for the C code described below.
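The relation can be checked numerically; a small sketch with arbitrary admissible arguments:

x <- 0.6; s <- 2; v <- 1.5
lerch(x, s, v)
2^(-s) * (lerch(x^2, s, v/2) + x * lerch(x^2, s, (v+1)/2))  # Same value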
Returns the value of the function evaluated at the values of x, s, v. If the above ranges of x and v are not satisfied, or some numeric problems occur, then this function will return an NA for those values. (The C code returns 6 possible return codes, but this is not passed back up to the R level.)
This function has not been thoroughly tested and contains limitations, for example, the zeta function cannot be computed with this function even though zeta(s) = Phi(x = 1, s, v = 1). Several numerical problems can arise, such as lack of convergence, overflow and underflow, especially near singularities. If any problems occur then an NA will be returned. For example, if x is close to 1 then convergence may be so slow that changing tolerance and/or iter may be needed to get an answer (that is treated cautiously).
There are a number of special cases, e.g., the Riemann zeta function is zeta(s) = Phi(x = 1, s, v = 1). Another example is the Hurwitz zeta function zeta(s, v) = Phi(x = 1, s, v). The special case of s = 1 corresponds to the hypergeometric 2F1, and this is implemented in the gsl package. The Lerch Phi function should not be confused with the Lerch zeta function, though they are quite similar.
S. V. Aksenov and U. D. Jentschura wrote the C code (called Version 1.00). The R wrapper function was written by T. Yee.
Originally the code was found at
http://aksenov.freeshell.org/lerchphi/source/lerchphi.c
.
Bateman, H. (1953). Higher Transcendental Functions. Volume 1. McGraw-Hill, NY, USA.
zeta
.
## Not run: s <- 2; v <- 1; x <- seq(-1.1, 1.1, length = 201) plot(x, lerch(x, s = s, v = v), type = "l", col = "blue", las = 1, main = paste0("lerch(x, s = ", s,", v = ", v, ")")) abline(v = 0, h = 1, lty = "dashed", col = "gray") ## End(Not run)
Survival in patients with Acute Myelogenous Leukemia
data(leukemia)
time: | survival or censoring time |
status: | censoring status |
x: | maintenance chemotherapy given? (factor) |
This data set has been transferred from survival and renamed
from aml to leukemia.
Rupert G. Miller (1997). Survival Analysis. John Wiley & Sons.
Estimates the scale parameter of the Levy distribution by maximum likelihood estimation.
levy(location = 0, lscale = "loglink", iscale = NULL)
location |
Location parameter. Must have a known value.
Called $a$ below. |
lscale |
Parameter link function for the (positive) scale parameter $b$.
See Links for more choices. |
iscale |
Initial value for the $b$ parameter.
By default, an initial value is chosen internally. |
The Levy distribution is one of three stable distributions
whose density function has a tractable form.
The formula for the density is
$$f(y; b) = \sqrt{\frac{b}{2\pi}} \,
\frac{\exp\!\left( -b / (2(y-a)) \right)}{(y - a)^{3/2}},$$
where $a < y < \infty$ and $b > 0$.
Note that if the location $a$ is very close to min(y)
(where y is the response), then numerical problems will
occur. The mean does not exist. The median is returned as
the fitted values.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
T. W. Yee
Nolan, J. P. (2005). Stable Distributions: Models for Heavy Tailed Data.
The Nolan article was at
http://academic2.american.edu/~jpnolan/stable/chap1.pdf.
nn <- 1000; loc1 <- 0; loc2 <- 10
myscale <- 1  # log link ==> 0 is the answer
ldata <- data.frame(y1 = loc1 + myscale/rnorm(nn)^2,  # Levy(myscale, a)
                    y2 = rlevy(nn, loc = loc2, scale = exp(+2)))
# Cf. Table 1.1 of Nolan for Levy(1, 0)
with(ldata, sum(y1 > 1) / length(y1))  # Should be 0.6827
with(ldata, sum(y1 > 2) / length(y1))  # Should be 0.5205
fit1 <- vglm(y1 ~ 1, levy(location = loc1), ldata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
summary(fit1)
head(weights(fit1, type = "work"))
fit2 <- vglm(y2 ~ 1, levy(location = loc2), ldata, trace = TRUE)
coef(fit2, matrix = TRUE)
Coef(fit2)
c(median = with(ldata, median(y2)),
  fitted.median = head(fitted(fit2), 1))
Estimation of the parameter of the standard and nonstandard log-gamma distribution.
lgamma1(lshape = "loglink", ishape = NULL)
lgamma3(llocation = "identitylink", lscale = "loglink",
        lshape = "loglink", ilocation = NULL, iscale = NULL,
        ishape = 1, zero = c("scale", "shape"))
llocation , lscale
|
Parameter link functions applied to the
location parameter $a$ and the positive scale parameter $b$. |
lshape |
Parameter link function applied to
the positive shape parameter $k$. |
ishape |
Initial value for $k$.
The default means an initial value is determined internally. |
ilocation , iscale
|
Initial values for $a$ and $b$.
The defaults mean initial values are determined internally. |
zero |
An integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The values must be from the set {1,2,3}.
The default is for the scale and shape parameters
to be modelled as intercept-only terms.
See CommonVGAMffArguments for more information. |
The probability density function of the standard log-gamma
distribution is given by
$$f(y; k) = \frac{\exp(k y - e^{y})}{\Gamma(k)},$$
for shape parameter $k > 0$ and all real $y$.
The mean of $Y$ is digamma(k) (returned as
the fitted values) and its variance is trigamma(k).

For the non-standard log-gamma distribution, one replaces
$y$ by $(y - a)/b$, where $a$ is the location parameter
and $b$ is the positive scale parameter.
Then the density function is
$$f(y; a, b, k) =
\frac{\exp\!\left(k (y-a)/b - e^{(y-a)/b}\right)}{b \, \Gamma(k)}.$$
The mean and variance of $Y$ are a + b*digamma(k) (returned as
the fitted values) and b^2 * trigamma(k), respectively.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
The standard log-gamma distribution can be viewed as a
generalization of the standard type 1 extreme value density:
when $k = 1$ the distribution of $-y$ is the standard
type 1 extreme value distribution.
The standard log-gamma distribution is fitted with lgamma1
and the non-standard (3-parameter) log-gamma distribution is
fitted with lgamma3
.
T. W. Yee
Kotz, S. and Nadarajah, S. (2000). Extreme Value Distributions: Theory and Applications, pages 48–49, London: Imperial College Press.
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions, 2nd edition, Volume 2, p.89, New York: Wiley.
rlgamma
,
gengamma.stacy
,
prentice74
,
gamma1
,
lgamma
.
ldata <- data.frame(y = rlgamma(100, shape = exp(1)))
fit <- vglm(y ~ 1, lgamma1, ldata, trace = TRUE, crit = "coef")
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)

ldata <- data.frame(x2 = runif(nn <- 5000))  # Another example
ldata <- transform(ldata, loc = -1 + 2 * x2, Scale = exp(1))
ldata <- transform(ldata, y = rlgamma(nn, loc, sc = Scale, sh = exp(0)))
fit2 <- vglm(y ~ x2, lgamma3, data = ldata, trace = TRUE, crit = "c")
coef(fit2, matrix = TRUE)
Density, distribution function, quantile function and random
generation for the log-gamma distribution with location
parameter location
, scale parameter scale
and
shape parameter k
.
dlgamma(x, location = 0, scale = 1, shape = 1, log = FALSE)
plgamma(q, location = 0, scale = 1, shape = 1,
        lower.tail = TRUE, log.p = FALSE)
qlgamma(p, location = 0, scale = 1, shape = 1,
        lower.tail = TRUE, log.p = FALSE)
rlgamma(n, location = 0, scale = 1, shape = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in runif. |
location |
the location parameter $a$. |
scale |
the (positive) scale parameter $b$. |
shape |
the (positive) shape parameter $k$. |
log |
Logical.
If log = TRUE then the logarithm of the density is returned. |
lower.tail , log.p
|
Same meaning as in pnorm or qnorm. |
See lgamma1
, the VGAM family function for
estimating the one parameter standard log-gamma distribution by
maximum likelihood estimation, for formulae and other details.
Apart from n, all the above arguments may be vectors and are
recycled to the appropriate length if necessary.
dlgamma
gives the density,
plgamma
gives the distribution function,
qlgamma
gives the quantile function, and
rlgamma
generates random deviates.
The VGAM family function lgamma3
is
for the three parameter (nonstandard) log-gamma distribution.
T. W. Yee and Kai Huang
Kotz, S. and Nadarajah, S. (2000). Extreme Value Distributions: Theory and Applications, pages 48–49, London: Imperial College Press.
## Not run: 
loc <- 1; Scale <- 1.5; shape <- 1.4
x <- seq(-3.2, 5, by = 0.01)
plot(x, dlgamma(x, loc = loc, Scale, shape = shape), type = "l",
     col = "blue", ylim = 0:1,
     main = "Blue is density, orange is the CDF",
     sub = "Red are 5,10,...,95 percentiles", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(qlgamma(seq(0.05, 0.95, by = 0.05), loc = loc, Scale, sh = shape),
      dlgamma(qlgamma(seq(0.05, 0.95, by = 0.05), loc = loc,
                      sc = Scale, shape = shape),
              loc = loc, Scale, shape = shape),
      col = "red", lty = 3, type = "h")
lines(x, plgamma(x, loc = loc, Scale, shape = shape), col = "orange")
abline(h = 0, lty = 2)
## End(Not run)
Estimates the (1-parameter) Lindley distribution by maximum likelihood estimation.
lindley(link = "loglink", itheta = NULL, zero = NULL)
link |
Link function applied to the (positive) parameter $\theta$.
See Links for more choices. |
itheta , zero
|
See CommonVGAMffArguments for information. |
The density function is given by
$$f(y; \theta) = \frac{\theta^2 (1 + y) e^{-\theta y}}{1 + \theta}$$
for $y > 0$ and $\theta > 0$.
The mean of $Y$ (returned as the fitted values)
is $\mu = (\theta + 2) / (\theta (\theta + 1))$.
The variance is $(\theta^2 + 4\theta + 2) / (\theta (\theta + 1))^2$.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
This VGAM family function can handle multiple responses (inputted as a matrix). Fisher scoring is implemented.
T. W. Yee
Lindley, D. V. (1958). Fiducial distributions and Bayes' theorem. Journal of the Royal Statistical Society, Series B, Methodological, 20, 102–107.
Ghitany, M. E. and Atieh, B. and Nadarajah, S. (2008). Lindley distribution and its application. Math. Comput. Simul., 78, 493–506.
ldata <- data.frame(y = rlind(n = 1000, theta = exp(3)))
fit <- vglm(y ~ 1, lindley, data = ldata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, cumulative distribution function, and random generation for the Lindley distribution.
dlind(x, theta, log = FALSE)
plind(q, theta, lower.tail = TRUE, log.p = FALSE)
rlind(n, theta)
x , q
|
vector of quantiles. |
n |
number of observations.
Same as in runif. |
log |
Logical.
If log = TRUE then the logarithm of the density is returned. |
theta |
positive parameter. |
lower.tail , log.p
|
Same meaning as in pnorm or qnorm. |
See lindley
for details.
dlind
gives the density,
plind
gives the cumulative distribution function, and
rlind
generates random deviates.
T. W. Yee and Kai Huang
theta <- exp(-1); x <- seq(0.0, 17, length = 700)
dlind(0:10, theta)
## Not run: 
plot(x, dlind(x, theta), type = "l", las = 1, col = "blue",
     main = "dlind(x, theta = exp(-1))")
abline(h = 1, col = "grey", lty = "dashed")
## End(Not run)
Returns the link functions, and parameter names, for vector generalized linear models (VGLMs).
linkfun(object, ...)
linkfunvlm(object, earg = FALSE, ...)
object |
An object which has parameter link functions, e.g.,
one of class "vglm". |
earg |
Logical.
Return the extra arguments associated with each
link function? If TRUE then a list is returned. |
... |
Arguments that might be used in the future. |
All fitted VGLMs have a link function applied to each parameter. This function returns these, and optionally, the extra arguments associated with them.
Usually just a (named) character string, with the link functions
in order.
It is named with the parameter names.
If earg = TRUE
then a list with the following components.
link |
The default output. |
earg |
The extra arguments, in order. |
Presently, the multinomial logit model has only
one link function, multilogitlink,
so a warning is not issued for that link.
For other models, if the number of link functions does
not equal $M$ (the number of linear/additive predictors)
then a warning may be issued.
Thomas W. Yee
linkfun
,
multilogitlink
,
vglm
.
pneumo <- transform(pneumo, let = log(exposure.time))
fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
coef(fit1, matrix = TRUE)
linkfun(fit1)
linkfun(fit1, earg = TRUE)

fit2 <- vglm(cbind(normal, mild, severe) ~ let, multinomial,
             data = pneumo)
coef(fit2, matrix = TRUE)
linkfun(fit2)
linkfun(fit2, earg = TRUE)
The VGAM package provides a number of (parameter) link functions which are described in general here. Collectively, they offer the user considerable choice and flexibility for modelling data.
TypicalVGAMlink(theta, someParameter = 0, bvalue = NULL,
                inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character.
This is usually $\theta$, the parameter,
but can sometimes be $\eta$, the linear/additive predictor
(see below). |
someParameter |
Some parameter, e.g., an offset. |
bvalue |
Boundary value, positive if given.
If supplied, values of theta that are out of range
are replaced by this value. |
inverse |
Logical. If TRUE the inverse link function is applied. |
deriv |
Integer.
Either 0, 1, or 2, specifying the order of
the derivative.
Most link functions handle values up to 3 or 4;
some can even handle values up to 9 but may
suffer from catastrophic cancellation near the
boundaries and be inefficient and slow. |
short , tag
|
Logical.
These are used for labelling the linear/additive
predictors when theta is character. |
Almost all VGAM link functions have
something similar to the argument list as
given above. In this help file we have
$\eta = g(\theta)$,
where $g$ is the link function,
$\theta$ is the parameter and
$\eta$ is the linear/additive
predictor. The link $g$ must be strictly
monotonic and twice-differentiable in its
range.
The following is a brief enumeration of all VGAM link functions.
For parameters lying between 0 and 1 (e.g.,
probabilities):
logitlink
,
probitlink
,
clogloglink
,
cauchitlink
,
foldsqrtlink
,
logclink
.
For positive parameters
(i.e., greater than 0):
loglink
,
negloglink
,
sqrtlink
,
powerlink
.
For parameters greater than 1:
logloglink,
loglogloglink (greater than $e$).

For parameters between $-1$ and $1$:
fisherzlink,
rhobitlink.

For parameters between finite $A$ and $B$:
extlogitlink,
logofflink ($B = \infty$).
For unrestricted parameters (i.e., any value):
identitylink
,
negidentitylink
,
reciprocallink
,
negreciprocallink
.
Returns one of: the link function value or its first or second derivative, the inverse link or its first or second derivative, or a character description of the link.
Here are the general details.
If inverse = FALSE and deriv = 0
(default) then the ordinary link function
$\eta = g(\theta)$ is returned.
If inverse = TRUE and deriv = 0
then the inverse link function value
is returned, hence theta is really
$\eta$ (the only occasion this happens).

If inverse = FALSE and deriv = 1
then it is $d\eta / d\theta$
as a function of $\theta$.
If inverse = FALSE and deriv = 2
then it is $d^2\eta / d\theta^2$
as a function of $\theta$.
If inverse = TRUE and deriv = 1
then it is $d\theta / d\eta$
as a function of $\theta$.
If inverse = TRUE and deriv = 2
then it is $d^2\theta / d\eta^2$
as a function of $\theta$.

It is only when deriv = 1 that
linkfun(theta, deriv = 1, inverse = TRUE)
and
linkfun(theta, deriv = 1, inverse = FALSE)
are reciprocals of each other.
In particular,
linkfun(theta, deriv = 2, inverse = TRUE)
and
linkfun(theta, deriv = 2, inverse = FALSE)
are not reciprocals of each other
in general.
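For example, with logitlink this reciprocal relation, and its
failure for deriv = 2, can be verified numerically:

p <- c(0.2, 0.5, 0.8)
logitlink(p, deriv = 1, inverse = FALSE) *
  logitlink(p, deriv = 1, inverse = TRUE)  # All 1s
logitlink(p, deriv = 2, inverse = FALSE) *
  logitlink(p, deriv = 2, inverse = TRUE)  # Not all 1s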
The output of link functions changed at
VGAM 0.9-9
(date was around 2015-07).
Formerly, linkfun(theta, deriv = 1)
is now linkfun(theta, deriv = 1, inverse = TRUE),
or equivalently,
1 / linkfun(theta, deriv = 1, inverse = FALSE).
Also, formerly, linkfun(theta, deriv = 2)
was 1 / linkfun(theta, deriv = 2, inverse = TRUE).
This was a bug.
Altogether, these are big changes and the
user should beware!
In VGAM 1.0-7
(January 2019)
all link function names were made to
end in the characters "link"
,
e.g.,
loglink
replaces loge
,
logitlink
replaces logit
.
Consequently, most of them were renamed.
Upward compatibility holds for older link
function names; however, users should adopt
the new names immediately.
VGAM link functions are generally
not compatible with other functions outside
the package. In particular, they won't work
with glm
or any other
package for fitting GAMs.
From October 2006 onwards,
all VGAM family functions will only
contain one default value for each link
argument rather than giving a vector
of choices. For example, rather than
binomialff(link = c("logitlink",
"probitlink", "clogloglink", "cauchitlink",
"identitylink"), ...)
it is now
binomialff(link = "logitlink", ...)
.
No checking will be done to see if the user's
choice is reasonable. This means that the
user can write his/her own VGAM link
function and use it within any VGAM
family function. Altogether this provides
greater flexibility. The downside is that
the user must specify the full name of
the link function, by either assigning the
link argument the full name as a character
string, or just the name itself. See the
examples below.
From August 2012 onwards, a major
change in link functions occurred.
Argument esigma
(and the like such
as earg
) used to be in VGAM
prior to version 0.9-0 (released during the
2nd half of 2012).
The major change is that arguments such as
offset
that used to be passed in via
those arguments can now be supplied directly through
the link function. For example,
gev(lshape = "logofflink", eshape = list(offset = 0.5))
is replaced by
gev(lshape = logofflink(offset = 0.5))
.
The @misc
slot no longer
has link
and earg
components,
but two other components replace
these. Functions such as
dtheta.deta()
,
d2theta.deta2()
,
d3theta.deta3()
,
eta2theta()
,
theta2eta()
are modified.
From January 2019 onwards, all link function
names ended in "link"
. See above
for details.
T. W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
TypicalVGAMfamilyFunction
,
linkfun
,
vglm
,
vgam
,
rrvglm
.
cqo
,
cao
.
logitlink("a") logitlink("a", short = FALSE) logitlink("a", short = FALSE, tag = TRUE) logofflink(1:5, offset = 1) # Same as log(1:5 + 1) powerlink(1:5, power = 2) # Same as (1:5)^2 ## Not run: # This is old and no longer works: logofflink(1:5, earg = list(offset = 1)) powerlink(1:5, earg = list(power = 2)) ## End(Not run) fit1 <- vgam(agaaus ~ altitude, binomialff(link = "clogloglink"), hunua) # best fit2 <- vgam(agaaus ~ altitude, binomialff(link = clogloglink ), hunua) # okay ## Not run: # This no longer works since "clog" is not a valid VGAM link function: fit3 <- vgam(agaaus ~ altitude, binomialff(link = "clog"), hunua) # not okay # No matter what the link, the estimated var-cov matrix is the same y <- rbeta(n = 1000, shape1 = exp(0), shape2 = exp(1)) fit1 <- vglm(y ~ 1, betaR(lshape1 = "identitylink", lshape2 = "identitylink"), trace = TRUE, crit = "coef") fit2 <- vglm(y ~ 1, betaR(lshape1 = logofflink(offset = 1.1), lshape2 = logofflink(offset = 1.1)), trace=TRUE) vcov(fit1, untransform = TRUE) vcov(fit1, untransform = TRUE) - vcov(fit2, untransform = TRUE) # Should be all 0s \dontrun{ # This is old: fit1@misc$earg # Some 'special' parameters fit2@misc$earg # Some 'special' parameters are here } par(mfrow = c(2, 2)) p <- seq(0.05, 0.95, len = 200) # A rather restricted range x <- seq(-4, 4, len = 200) plot(p, logitlink(p), type = "l", col = "blue") plot(x, logitlink(x, inverse = TRUE), type = "l", col = "blue") plot(p, logitlink(p, deriv=1), type="l", col="blue") # 1 / (p*(1-p)) plot(p, logitlink(p, deriv=2), type="l", col="blue") # (2*p-1)/(p*(1-p))^2 ## End(Not run)
logitlink("a") logitlink("a", short = FALSE) logitlink("a", short = FALSE, tag = TRUE) logofflink(1:5, offset = 1) # Same as log(1:5 + 1) powerlink(1:5, power = 2) # Same as (1:5)^2 ## Not run: # This is old and no longer works: logofflink(1:5, earg = list(offset = 1)) powerlink(1:5, earg = list(power = 2)) ## End(Not run) fit1 <- vgam(agaaus ~ altitude, binomialff(link = "clogloglink"), hunua) # best fit2 <- vgam(agaaus ~ altitude, binomialff(link = clogloglink ), hunua) # okay ## Not run: # This no longer works since "clog" is not a valid VGAM link function: fit3 <- vgam(agaaus ~ altitude, binomialff(link = "clog"), hunua) # not okay # No matter what the link, the estimated var-cov matrix is the same y <- rbeta(n = 1000, shape1 = exp(0), shape2 = exp(1)) fit1 <- vglm(y ~ 1, betaR(lshape1 = "identitylink", lshape2 = "identitylink"), trace = TRUE, crit = "coef") fit2 <- vglm(y ~ 1, betaR(lshape1 = logofflink(offset = 1.1), lshape2 = logofflink(offset = 1.1)), trace=TRUE) vcov(fit1, untransform = TRUE) vcov(fit1, untransform = TRUE) - vcov(fit2, untransform = TRUE) # Should be all 0s \dontrun{ # This is old: fit1@misc$earg # Some 'special' parameters fit2@misc$earg # Some 'special' parameters are here } par(mfrow = c(2, 2)) p <- seq(0.05, 0.95, len = 200) # A rather restricted range x <- seq(-4, 4, len = 200) plot(p, logitlink(p), type = "l", col = "blue") plot(x, logitlink(x, inverse = TRUE), type = "l", col = "blue") plot(p, logitlink(p, deriv=1), type="l", col="blue") # 1 / (p*(1-p)) plot(p, logitlink(p, deriv=2), type="l", col="blue") # (2*p-1)/(p*(1-p))^2 ## End(Not run)
Maximum likelihood estimation of the 3-parameter generalized beta distribution as proposed by Libby and Novick (1982).
lino(lshape1 = "loglink", lshape2 = "loglink", llambda = "loglink",
     ishape1 = NULL, ishape2 = NULL, ilambda = 1, zero = NULL)
lshape1 , lshape2
|
Parameter link functions applied to the two
(positive) shape parameters $a$ and $b$.
See Links for more choices. |
llambda |
Parameter link function applied to the
parameter $\lambda$.
See Links for more choices. |
ishape1 , ishape2 , ilambda
|
Initial values for the parameters.
A NULL value means one is computed internally.
The default value of ilambda corresponds to a standard
beta distribution. |
zero |
Can be an integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
Here, the values must be from the set {1,2,3} which correspond
to $a$, $b$ and $\lambda$, respectively. |
Proposed by Libby and Novick (1982), this distribution has density
$$f(y; a, b, \lambda) =
\frac{\lambda^{a} y^{a-1} (1-y)^{b-1}}
     {B(a,b) \, \{1 - (1-\lambda) y\}^{a+b}}$$
for $a > 0$, $b > 0$, $\lambda > 0$ and $0 < y < 1$.
Here $B$ is the beta function (see beta).
The mean is a complicated function involving the Gauss hypergeometric
function.
If $Y$ has a lino
distribution with parameters
shape1, shape2, lambda, then
$\lambda Y / (1 - (1-\lambda) Y)$
has a standard beta distribution with parameters
shape1, shape2.

Since $\lambda = 1$ corresponds to the
standard beta distribution, a summary of the fitted model
performs a t-test for whether the data belongs to a standard
beta distribution (provided the loglink link for
$\lambda$ is used; this is the default).
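The transformation to a standard beta can be checked by simulation;
a sketch using rlino (documented further below):

a <- exp(1); b <- exp(2); lambda <- exp(1)
y <- rlino(1e4, shape1 = a, shape2 = b, lambda = lambda)
z <- lambda * y / (1 - (1 - lambda) * y)  # Should be Beta(a, b)
c(mean(z), a / (a + b))  # Compare with the beta mean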
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
The fitted values, which would usually be the mean, have not been
implemented yet. Currently the median is returned as the fitted
values.
Although Fisher scoring is used, the working weight matrices are positive-definite only in a certain region of the parameter space. Problems with this indicate poor initial values or an ill-conditioned model or insufficient data etc.
This model can be difficult to fit. A reasonably good value of
ilambda seems to be needed, so if the self-starting initial
values fail, try experimenting with the initial value arguments.
Experience suggests it is better for ilambda to be
a little larger, rather than smaller, than the true value.
T. W. Yee
Libby, D. L. and Novick, M. R. (1982). Multivariate generalized beta distributions with applications to utility assessment. Journal of Educational Statistics, 7, 271–294.
Gupta, A. K. and Nadarajah, S. (2004). Handbook of Beta Distribution and Its Applications, NY: Marcel Dekker, Inc.
ldata <- data.frame(y1 = rbeta(n = 1000, exp(0.5), exp(1)))  # Std beta
fit <- vglm(y1 ~ 1, lino, data = ldata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
head(fitted(fit))
summary(fit)

# Nonstandard beta distribution
ldata <- transform(ldata, y2 = rlino(1000, shape1 = exp(1),
                                     shape2 = exp(2), lambda = exp(1)))
fit2 <- vglm(y2 ~ 1,
             lino(lshape1 = "identitylink", lshape2 = "identitylink",
                  ilamb = 10), data = ldata, trace = TRUE)
coef(fit2, matrix = TRUE)
Density, distribution function, quantile function and random generation for the generalized beta distribution, as proposed by Libby and Novick (1982).
dlino(x, shape1, shape2, lambda = 1, log = FALSE)
plino(q, shape1, shape2, lambda = 1, lower.tail = TRUE, log.p = FALSE)
qlino(p, shape1, shape2, lambda = 1, lower.tail = TRUE, log.p = FALSE)
rlino(n, shape1, shape2, lambda = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in runif. |
shape1 , shape2 , lambda
|
see lino. |
log |
Logical.
If log = TRUE then the logarithm of the density is returned. |
lower.tail , log.p
|
Same meaning as in pnorm or qnorm. |
See lino
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other
details.
dlino
gives the density,
plino
gives the distribution function,
qlino
gives the quantile function, and
rlino
generates random deviates.
T. W. Yee and Kai Huang
lino
.
## Not run: 
lambda <- 0.4; shape1 <- exp(1.3); shape2 <- exp(1.3)
x <- seq(0.0, 1.0, len = 101)
plot(x, dlino(x, shape1 = shape1, shape2 = shape2, lambda = lambda),
     type = "l", col = "blue", las = 1, ylab = "",
     main = "Blue is PDF, orange is the CDF",
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0, col = "blue", lty = 2)
lines(x, plino(x, shape1, shape2, lambda = lambda), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qlino(probs, shape1 = shape1, shape2 = shape2, lambda = lambda)
lines(Q, dlino(Q, shape1 = shape1, shape2 = shape2, lambda = lambda),
      col = "purple", lty = 3, type = "h")
plino(Q, shape1, shape2, lambda = lambda) - probs  # Should be all 0
## End(Not run)
Low-iron rat teratology data.
data(lirat)
A data frame with 58 observations on the following 4 variables.
N
Litter size.
R
Number of dead fetuses.
hb
Hemoglobin level.
grp
Group number. Group 1 is the untreated (low-iron) group, group 2 received injections on day 7 or day 10 only, group 3 received injections on days 0 and 7, and group 4 received injections weekly.
The following description comes from Moore and Tsiatis (1991). The data come from the experimental setup of Shepard et al. (1980), which is typical of studies of the effects of chemical agents or dietary regimens on fetal development in laboratory rats.

Female rats were put on iron-deficient diets and divided into 4 groups. One group of controls was given weekly injections of iron supplement to bring their iron intake to normal levels, while another group was given only placebo injections. Two other groups were given fewer iron-supplement injections than the controls. The rats were made pregnant, sacrificed 3 weeks later, and the total number of fetuses and the number of dead fetuses in each litter were counted.
For each litter the number of dead fetuses may be considered to be
Binomial($N$, $p$) where $N$ is the litter size and
$p$ is the probability of a fetus dying.
The parameter $p$ is expected
to vary from litter to litter, therefore the total variance of the
proportions will be greater than that predicted by a binomial model,
even when the covariates for hemoglobin level and experimental group
are accounted for.
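Such extra-binomial variation is often accommodated by a
beta-binomial model; as a sketch, one possible VGAM fit
(not taken from this help file) is:

# Beta-binomial allows p to vary from litter to litter
fit <- vglm(cbind(R, N - R) ~ hb, betabinomial,
            data = lirat, trace = TRUE)
coef(fit, matrix = TRUE)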
Moore, D. F. and Tsiatis, A. (1991) Robust Estimation of the Variance in Moment Methods for Extra-binomial and Extra-Poisson Variation. Biometrics, 47, 383–401.
Shepard, T. H., Mackler, B. and Finch, C. A. (1980). Reproductive studies in the iron-deficient rat. Teratology, 22, 329–334.
## Not run: 
# cf. Figure 3 of Moore and Tsiatis (1991)
plot(R / N ~ hb, data = lirat, pch = as.character(grp), col = grp,
     las = 1, xlab = "Hemoglobin level", ylab = "Proportion Dead")
## End(Not run)
LMS quantile regression with the Box-Cox transformation to the gamma distribution.
lms.bcg(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"),
        llambda = "identitylink", lmu = "identitylink",
        lsigma = "loglink", idf.mu = 4, idf.sigma = 2,
        ilambda = 1, isigma = NULL)
percentiles |
A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as 'fitted values'. |
zero |
See lms.bcn. |
llambda , lmu , lsigma
|
See lms.bcn. |
idf.mu , idf.sigma
|
See lms.bcn. |
ilambda , isigma
|
See lms.bcn. |
Given a value of the covariate, this function applies a
Box-Cox transformation to the response to best obtain a
gamma distribution. The parameters chosen to do this are
estimated by maximum likelihood or penalized maximum likelihood.
Similar details can be found at lms.bcn
.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
This VGAM family function comes with the same
warnings as lms.bcn
.
Also, the expected value of the second derivative with
respect to lambda may be incorrect (my calculations do
not agree with the Lopatatzidis and Green manuscript).
Similar notes can be found at lms.bcn
.
Thomas W. Yee
Lopatatzidis A. and Green, P. J. (unpublished manuscript). Semiparametric quantile regression using the gamma distribution.
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
lms.bcn
,
lms.yjn
,
qtplot.lmscreg
,
deplot.lmscreg
,
cdf.lmscreg
,
bmi.nz
,
amlexponential
.
# This converges, but deplot(fit) and qtplot(fit) do not work
fit0 <- vglm(BMI ~ sm.bs(age, df = 4), lms.bcg, bmi.nz, trace = TRUE)
coef(fit0, matrix = TRUE)
## Not run: 
par(mfrow = c(1, 1))
plotvgam(fit0, se = TRUE)  # Plot mu function (only)
## End(Not run)

# Use a trick: fit0 is used for initial values for fit1.
fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), etastart = predict(fit0),
             lms.bcg(zero = 1), bmi.nz, trace = TRUE)

# Difficult to get a model that converges.  Here, we prematurely
# stop iterations because it fails near the solution.
fit2 <- vgam(BMI ~ s(age, df = c(4, 2)), maxit = 4,
             lms.bcg(zero = 1, ilam = 3), bmi.nz, trace = TRUE)
summary(fit1)
head(predict(fit1))
head(fitted(fit1))
head(bmi.nz)
# Person 1 is near the lower quartile of BMI amongst people his age
head(cdf(fit1))

## Not run: 
# Quantile plot
par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE)
qtplot(fit1, percentiles = c(5, 50, 90, 99), main = "Quantiles",
       xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4)

# Density plot
ygrid <- seq(15, 43, len = 100)  # BMI ranges
par(mfrow = c(1, 1), lwd = 2)
(aa <- deplot(fit1, x0 = 20, y = ygrid, xlab = "BMI", col = "black",
    main = "PDFs at Age = 20 (black), 42 (red) and 55 (blue)"))
aa <- deplot(fit1, x0 = 42, y = ygrid, add = TRUE, llty = 2,
             col = "red")
aa <- deplot(fit1, x0 = 55, y = ygrid, add = TRUE, llty = 4,
             col = "blue", Attach = TRUE)
aa@post$deplot  # Contains density function values
## End(Not run)
LMS quantile regression with the Box-Cox transformation to normality.
lms.bcn(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"),
        llambda = "identitylink", lmu = "identitylink",
        lsigma = "loglink", idf.mu = 4, idf.sigma = 2,
        ilambda = 1, isigma = NULL, tol0 = 0.001)
percentiles |
A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as ‘fitted values’. |
zero |
Can be an integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The values must be from the set {1,2,3}.
The default value usually increases the chance of successful
convergence.
Setting zero = NULL means all three parameters are modelled
as functions of the covariate. |
llambda , lmu , lsigma
|
Parameter link functions applied to the first, second and third
linear/additive predictors.
See Links for more choices. |
idf.mu |
Degrees of freedom for the cubic smoothing spline fit applied to
get an initial estimate of mu. |
idf.sigma |
Degrees of freedom for the cubic smoothing spline fit applied to
get an initial estimate of sigma. |
ilambda |
Initial value for lambda.
If necessary, it is recycled to be a vector of length $n$,
the number of observations. |
isigma |
Optional initial value for sigma.
If necessary, it is recycled to be a vector of length $n$.
The default, NULL, means an initial value is computed internally. |
tol0 |
Small positive number, the tolerance for testing if lambda is equal to zero. |
Given a value of the covariate, this function applies a Box-Cox transformation to the response to best obtain normality. The parameters chosen to do this are estimated by maximum likelihood or penalized maximum likelihood.
In more detail,
the basic idea behind this method is that, for a fixed
value of $x$, a Box-Cox transformation of the
response $Y$
is applied to obtain standard normality. The 3 parameters
($\lambda$, $\mu$, $\sigma$,
which start with the letters "L-M-S"
respectively, hence its name) are chosen to maximize a penalized
log-likelihood (with vgam). Then the
appropriate quantiles of the standard normal distribution
are back-transformed onto the original scale to get the
desired quantiles.
The three parameters may vary as a smooth function of $x$.

The Box-Cox power transformation here of the $Y$,
given $x$, is
$$Z = \frac{(Y / \mu(x))^{\lambda(x)} - 1}{\sigma(x) \, \lambda(x)}$$
for $\lambda(x) \neq 0$.
(The singularity at $\lambda(x) = 0$
is handled by a simple function involving a logarithm.)
Then $Z$ is assumed to have a standard normal distribution.
The parameter $\sigma$ must be positive, therefore
VGAM chooses
$\eta(x)^{T} = (\lambda(x), \mu(x), \log(\sigma(x)))$
by default.
The parameter $\mu$ is also positive, but while
$\log(\mu)$ is
available, it is not the default because $\mu$ is
more directly interpretable.
Given the estimated linear/additive predictors, the
$100\alpha$% percentile can be estimated
by inverting the Box-Cox power transformation at the
$100\alpha$% percentile of the standard
normal distribution.

Of the three functions, it is often a good idea to allow
$\mu(x)$ to be more flexible because the functions
$\lambda(x)$ and $\sigma(x)$
usually vary more smoothly with $x$. This is somewhat
reflected in the default value for the argument zero,
viz. zero = c(1, 3).
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
The computations are not simple, therefore convergence may
fail. Set trace = TRUE to monitor convergence if it
isn't set already. Convergence failure will occur if, e.g.,
the response is bimodal at any particular value of $x$.
In case of convergence failure, try different starting values.
Also, the estimate may diverge quickly near the solution, in
which case try prematurely stopping the iterations by assigning
maxits
to be the iteration number corresponding to the
highest likelihood value.
One trick is to fit a simple model and use it to provide initial values for a more complex model; see in the examples below.
The response must be positive because the Box-Cox transformation cannot handle negative values. In theory, the LMS-Yeo-Johnson-normal method can handle both positive and negative values.
In general, the lambda and sigma functions should be
smoother than the mean function.
Having zero = 1
, zero = 3
or zero = c(1, 3)
is often a good idea. See the example below.
Thomas W. Yee
Cole, T. J. and Green, P. J. (1992). Smoothing Reference Centile Curves: The LMS Method and Penalized Likelihood. Statistics in Medicine, 11, 1305–1319.
Green, P. J. and Silverman, B. W. (1994). Nonparametric Regression and Generalized Linear Models: A Roughness Penalty Approach, London: Chapman & Hall.
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
lms.bcg
,
lms.yjn
,
qtplot.lmscreg
,
deplot.lmscreg
,
cdf.lmscreg
,
eCDF
,
extlogF1
,
alaplace1
,
amlnormal
,
denorm
,
CommonVGAMffArguments
.
## Not run: require("VGAMdata") mysub <- subset(xs.nz, sex == "M" & ethnicity == "Maori" & study1) mysub <- transform(mysub, BMI = weight / height^2) BMIdata <- na.omit(mysub) BMIdata <- subset(BMIdata, BMI < 80 & age < 65, select = c(age, BMI)) # Delete an outlier summary(BMIdata) fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata) par(mfrow = c(1, 2)) plot(fit, scol = "blue", se = TRUE) # The two centered smooths head(predict(fit)) head(fitted(fit)) head(BMIdata) head(cdf(fit)) # Person 46 is probably overweight, given his age 100 * colMeans(c(depvar(fit)) < fitted(fit)) # Empirical proportions # Correct for "vgam" objects but not very elegant: fit@family@linkinv(eta = predict(fit, data.frame(age = 60)), extra = list(percentiles = c(10, 50))) if (FALSE) { # These work for "vglm" objects: fit2 <- vglm(BMI ~ bs(age, df = 4), lms.bcn(zero = 3), BMIdata) predict(fit2, percentiles = c(10, 50), newdata = data.frame(age = 60), type = "response") head(fitted(fit2, percentiles = c(10, 50))) # Different percentiles } # Convergence problems? Use fit0 for initial values for fit1 fit0 <- vgam(BMI ~ s(age, df = 4), lms.bcn(zero = c(1, 3)), BMIdata) fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata, etastart = predict(fit0)) ## End(Not run) ## Not run: # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 66), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "PDFs at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values ## End(Not run)
## Not run: require("VGAMdata") mysub <- subset(xs.nz, sex == "M" & ethnicity == "Maori" & study1) mysub <- transform(mysub, BMI = weight / height^2) BMIdata <- na.omit(mysub) BMIdata <- subset(BMIdata, BMI < 80 & age < 65, select = c(age, BMI)) # Delete an outlier summary(BMIdata) fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata) par(mfrow = c(1, 2)) plot(fit, scol = "blue", se = TRUE) # The two centered smooths head(predict(fit)) head(fitted(fit)) head(BMIdata) head(cdf(fit)) # Person 46 is probably overweight, given his age 100 * colMeans(c(depvar(fit)) < fitted(fit)) # Empirical proportions # Correct for "vgam" objects but not very elegant: fit@family@linkinv(eta = predict(fit, data.frame(age = 60)), extra = list(percentiles = c(10, 50))) if (FALSE) { # These work for "vglm" objects: fit2 <- vglm(BMI ~ bs(age, df = 4), lms.bcn(zero = 3), BMIdata) predict(fit2, percentiles = c(10, 50), newdata = data.frame(age = 60), type = "response") head(fitted(fit2, percentiles = c(10, 50))) # Different percentiles } # Convergence problems? Use fit0 for initial values for fit1 fit0 <- vgam(BMI ~ s(age, df = 4), lms.bcn(zero = c(1, 3)), BMIdata) fit1 <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), BMIdata, etastart = predict(fit0)) ## End(Not run) ## Not run: # Quantile plot par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE) qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles", xlim = c(15, 66), las = 1, ylab = "BMI", lwd = 2, lcol = 4) # Density plot ygrid <- seq(15, 43, len = 100) # BMI ranges par(mfrow = c(1, 1), lwd = 2) (aa <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black", main = "PDFs at Age = 20 (black), 42 (red) and 55 (blue)")) aa <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red") aa <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue", Attach = TRUE) aa@post$deplot # Contains density function values ## End(Not run)
LMS quantile regression with the Yeo-Johnson transformation to normality. This family function is experimental and the LMS-BCN family function is recommended instead.
lms.yjn(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"),
        llambda = "identitylink", lsigma = "loglink",
        idf.mu = 4, idf.sigma = 2, ilambda = 1, isigma = NULL,
        rule = c(10, 5), yoffset = NULL, diagW = FALSE,
        iters.diagW = 6)
lms.yjn2(percentiles = c(25, 50, 75), zero = c("lambda", "sigma"),
         llambda = "identitylink", lmu = "identitylink",
         lsigma = "loglink", idf.mu = 4, idf.sigma = 2,
         ilambda = 1.0, isigma = NULL, yoffset = NULL,
         nsimEIM = 250)
percentiles |
A numerical vector containing values between 0 and 100, which are the quantiles. They will be returned as 'fitted values'. |
zero |
See lms.bcn. |
llambda , lmu , lsigma
|
See lms.bcn. |
idf.mu , idf.sigma
|
See lms.bcn. |
ilambda , isigma
|
See lms.bcn. |
rule |
Number of abscissae used in the Gaussian integration
scheme to work out elements of the weight matrices.
The values given are the possible choices, with the first value
being the default.
The larger the value, the more accurate the approximation is
likely to be but involving more computational expense. |
yoffset |
A value to be added to the response y, for the purpose
of centering the response before fitting the model to the data.
The default value, NULL, means -median(y) is used, so that the
response actually used has median zero. |
diagW |
Logical. This argument is offered because the expected
information matrix may not be positive-definite. Using the
diagonal elements of this matrix results in a higher chance of
it being positive-definite, however convergence will be very
slow. If TRUE, the diagonal elements are used for the first
iters.diagW iterations. |
iters.diagW |
Integer. Number of iterations in which the
diagonal elements of the expected information matrix are used.
Only used if diagW = TRUE. |
nsimEIM |
See CommonVGAMffArguments for more information. |
Given a value of the covariate, this function applies a
Yeo-Johnson transformation to the response to best obtain
normality. The parameters chosen to do this are estimated by
maximum likelihood or penalized maximum likelihood.
The function lms.yjn2()
estimates the expected information
matrices using simulation (and is consequently slower) while
lms.yjn()
uses numerical integration.
Try the other if one function fails.
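For reference, the transformation of Yeo and Johnson (2000) has a
simple closed form; the sketch below (my.yj is a hypothetical helper,
not the package's own code) shows it for scalar lambda:

my.yj <- function(y, lambda, eps = 1e-8) {
  # Yeo-Johnson transformation psi(lambda, y); handles either sign of y
  pos <- y >= 0
  out <- numeric(length(y))
  out[pos] <- if (abs(lambda) > eps)
    ((y[pos] + 1)^lambda - 1) / lambda else log1p(y[pos])
  out[!pos] <- if (abs(lambda - 2) > eps)
    -((1 - y[!pos])^(2 - lambda) - 1) / (2 - lambda) else
    -log1p(-y[!pos])
  out
}
my.yj(c(-2, 0, 3), lambda = 0.5)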
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
The computations are not simple, therefore convergence may fail. In that case, try different starting values.
The generic function predict
, when applied to a
lms.yjn
fit, does not add back the yoffset
value.
As described above, this family function is experimental and the LMS-BCN family function is recommended instead.
The response may contain both positive and negative values. In contrast, the LMS-Box-Cox-normal and LMS-Box-Cox-gamma methods only handle a positive response because the Box-Cox transformation cannot handle negative values.
Some other notes can be found at lms.bcn
.
Thomas W. Yee
Yeo, I.-K. and Johnson, R. A. (2000). A new family of power transformations to improve normality or symmetry. Biometrika, 87, 954–959.
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
Yee, T. W. (2002). An Implementation for Regression Quantile Estimation. Pages 3–14. In: Haerdle, W. and Ronz, B., Proceedings in Computational Statistics COMPSTAT 2002. Heidelberg: Physica-Verlag.
lms.bcn
,
lms.bcg
,
qtplot.lmscreg
,
deplot.lmscreg
,
cdf.lmscreg
,
bmi.nz
,
amlnormal
.
fit <- vgam(BMI ~ s(age, df = 4), lms.yjn, bmi.nz, trace = TRUE)
head(predict(fit))
head(fitted(fit))
head(bmi.nz)
# Person 1 is near the lower quartile of BMI amongst people his age
head(cdf(fit))

## Not run: 
# Quantile plot
par(bty = "l", mar = c(5, 4, 4, 3) + 0.1, xpd = TRUE)
qtplot(fit, percentiles = c(5, 50, 90, 99), main = "Quantiles",
       xlim = c(15, 90), las = 1, ylab = "BMI", lwd = 2, lcol = 4)

# Density plot
ygrid <- seq(15, 43, len = 100)  # BMI ranges
par(mfrow = c(1, 1), lwd = 2)
(Z <- deplot(fit, x0 = 20, y = ygrid, xlab = "BMI", col = "black",
    main = "PDFs at Age = 20 (black), 42 (red) and 55 (blue)"))
Z <- deplot(fit, x0 = 42, y = ygrid, add = TRUE, llty = 2, col = "red")
Z <- deplot(fit, x0 = 55, y = ygrid, add = TRUE, llty = 4, col = "blue",
            Attach = TRUE)
with(Z@post, deplot)  # Contains PDF values; == Z@post$deplot
## End(Not run)
Density, distribution function, quantile function, and random generation for the logarithmic distribution.
dlog(x, shape, log = FALSE)
plog(q, shape, lower.tail = TRUE, log.p = FALSE)
qlog(p, shape)
rlog(n, shape)
x , q , p , n , lower.tail
|
Same interpretation as in runif. |
shape |
The shape parameter value $s$ described in logff. |
log , log.p
|
Logical.
If TRUE then all probabilities p are given as log(p). |
The details are given in logff
.
dlog
gives the density,
plog
gives the distribution function,
qlog
gives the quantile function, and
rlog
generates random deviates.
Given some response data, the VGAM family function
logff
estimates the parameter shape
.
For plog()
, if argument q
contains large values
and/or q
is long in length
then the memory requirements may be very high.
Very large values in q
are handled by an approximation by
Owen (1965).
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
logff
,
Gaitdlog
,
Oilog
,
Otlog
.
dlog(1:20, 0.5)
rlog(20, 0.5)

## Not run: shape <- 0.8; x <- 1:10
plot(x, dlog(x, shape = shape), type = "h", ylim = 0:1,
     sub = "shape=0.8", las = 1, col = "blue", ylab = "shape",
     main = "Logarithmic distribution: blue=PDF; orange=CDF")
lines(x + 0.1, plog(x, shape), col = "orange", lty = 3, type = "h")
## End(Not run)
Computes log(1 + exp(x))
and log(1 - exp(-x))
accurately.
log1mexp(x)
log1pexp(x)
x |
A vector of reals (numeric).
Complex numbers are not allowed. |
Computes log(1 + exp(x)) and log(1 - exp(-x))
accurately. An adjustment is made when $x$ is away from 0
in value.

log1mexp(x) gives the value of
$\log(1 - \exp(-x))$.

log1pexp(x) gives the value of
$\log(1 + \exp(x))$.
If NA
or NaN
is present in the input, the
corresponding output will be NA
.
This is a direct translation of the function in Martin Maechler's (2012) paper by Xiangjie Xue and T. W. Yee.
Maechler, Martin (2012). Accurately Computing log(1-exp(-|a|)). Assessed from the Rmpfr package.
x <- c(10, 50, 100, 200, 400, 500, 800, 1000, 1e4, 1e5, 1e20, Inf, NA)
log1pexp(x)
log(1 + exp(x))  # Naive; suffers from overflow
log1mexp(x)
log(1 - exp(-x))
y <- -x
log1pexp(y)
log(1 + exp(y))  # Naive; suffers from inaccuracy
Computes the Complementary-log Transformation, Including its Inverse and the First Two Derivatives.
logclink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
         short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue |
See Links. |
inverse , deriv , short , tag
|
Details at Links. |
The complementary-log link function is suitable for parameters that
are less than unity.
Numerical values of theta
close to 1 or out of range
result in
Inf
, -Inf
, NA
or NaN
.
For deriv = 0, the complementary log of theta, i.e.,
log(1 - theta) when inverse = FALSE,
and if inverse = TRUE then
1 - exp(theta).

For deriv = 1, the function returns
d eta / d theta as a function of theta
if inverse = FALSE,
else if inverse = TRUE then it returns the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
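A quick numeric check of the forward and inverse maps:

theta <- 0.3
logclink(theta)                            # log(1 - 0.3)
log1p(-theta)                              # Same thing
logclink(logclink(theta), inverse = TRUE)  # Recovers 0.3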
Numerical instability may occur when theta
is close to 1.
One way of overcoming this is to use bvalue
.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
loglink
,
clogloglink
,
logloglink
,
logofflink
.
## Not run: 
logclink(seq(-0.2, 1.1, by = 0.1))  # Has NAs
## End(Not run)
logclink(seq(-0.2, 1.1, by = 0.1),
         bvalue = 1 - .Machine$double.eps)  # Has no NAs
Maximum likelihood estimation of the 2-parameter log F distribution.
logF(lshape1 = "loglink", lshape2 = "loglink",
     ishape1 = NULL, ishape2 = 1, imethod = 1)
lshape1 , lshape2
|
Parameter link functions for
the shape parameters.
Called $\alpha$ and $\beta$ respectively.
See Links for more choices. |
ishape1 , ishape2
|
Optional initial values for the shape parameters.
If given, it must be numeric and values are recycled to the
appropriate length.
The default is to choose the value internally.
See CommonVGAMffArguments for more information. |
imethod |
Initialization method.
Either the value 1, 2, or ....
See CommonVGAMffArguments for more information. |
The density for this distribution is
$$f(y; \alpha, \beta) =
\frac{\exp(\alpha y)}{B(\alpha, \beta) \, (1 + e^{y})^{\alpha + \beta}},$$
where $y$ is real,
$\alpha > 0$, $\beta > 0$,
and $B(\cdot, \cdot)$ is the beta function, beta.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
and vgam
.
Thomas W. Yee
Jones, M. C. (2008). On a class of distributions with simple exponential tails. Statistica Sinica, 18(3), 1101–1110.
nn <- 1000
ldata <- data.frame(y1 = rnorm(nn, +1, sd = exp(2)),  # Not proper data
                    x2 = rnorm(nn, -1, sd = exp(2)),
                    y2 = rnorm(nn, -1, sd = exp(2)))  # Not proper data
fit1 <- vglm(y1 ~ 1 , logF, ldata, trace = TRUE)
fit2 <- vglm(y2 ~ x2, logF, ldata, trace = TRUE)
coef(fit2, matrix = TRUE)
summary(fit2)
vcov(fit2)
head(fitted(fit1))
with(ldata, mean(y1))
max(abs(head(fitted(fit1)) - with(ldata, mean(y1))))
Estimating the (single) parameter of the logarithmic distribution.
logff(lshape = "logitlink", gshape = -expm1(-7 * ppoints(4)), zero = NULL)
lshape |
Parameter link function for the shape parameter $s$.
See Links for more choices. |
gshape , zero
|
Details at CommonVGAMffArguments. |
The logarithmic distribution is
a generalized power series distribution that is
based specifically on the logarithmic series
(scaled to a probability function).
Its probability function is
$$f(y) = a s^{y} / y, \quad y = 1, 2, 3, \ldots,$$
where $0 < s < 1$ (called shape),
and $a = -1 / \log(1 - s)$.
The mean is $a s / (1 - s)$
(returned as the fitted values)
and the variance is $a s (1 - a s) / (1 - s)^2$.
When the sample mean is large, the value of $s$ tends to
be very close to 1, hence it could be argued that
logitlink is not the best choice.
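The mean formula can be checked quickly by simulation, using the
rlog deviates documented at Log:

s <- 0.8
a <- -1 / log1p(-s)          # a = -1 / log(1 - s)
y <- rlog(1e5, shape = s)
c(mean(y), a * s / (1 - s))  # Should be close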
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The function log computes the natural logarithm;
in the VGAM library, the corresponding link function is
loglink.
Multiple responses are permitted.
The “logarithmic distribution” has various meanings in
the literature. Sometimes it is also called the
log-series distribution.
Others use the name “logarithmic distribution” for a certain
continuous distribution.
T. W. Yee
Johnson N. L., Kemp, A. W. and Kotz S. (2005). Univariate Discrete Distributions, 3rd edition, ch.7. Hoboken, New Jersey: Wiley.
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011) Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
Log
,
gaitdlog
,
oalog
,
oilog
,
otlog
,
log
,
loglink
,
logofflink
,
explogff
,
simulate.vlm
.
nn <- 1000
ldata <- data.frame(y = rlog(nn, shape = logitlink(0.2, inv = TRUE)))
fit <- vglm(y ~ 1, logff, data = ldata, trace = TRUE, crit = "c")
coef(fit, matrix = TRUE)
Coef(fit)
## Not run: 
with(ldata, spikeplot(y, col = "blue", capped = TRUE))
x <- seq(1, with(ldata, max(y)), by = 1)
with(ldata, lines(x + 0.1, dlog(x, Coef(fit)[1]), col = "orange",
                  type = "h", lwd = 2))
## End(Not run)

# Example: Corbet (1943) butterfly Malaya data
corbet <- data.frame(nindiv = 1:24,
                     ofreq = c(118, 74, 44, 24, 29, 22, 20, 19, 20,
                               15, 12, 14, 6, 12, 6, 9, 9, 6, 10, 10,
                               11, 5, 3, 3))
fit <- vglm(nindiv ~ 1, logff, data = corbet, weights = ofreq)
coef(fit, matrix = TRUE)
shapehat <- Coef(fit)["shape"]
pdf2 <- dlog(x = with(corbet, nindiv), shape = shapehat)
print(with(corbet, cbind(nindiv, ofreq, fitted = pdf2 * sum(ofreq))),
      digits = 1)
Estimates the location and scale parameters of the logistic distribution by maximum likelihood estimation.
logistic1(llocation = "identitylink", scale.arg = 1, imethod = 1)
logistic(llocation = "identitylink", lscale = "loglink",
         ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale")
llocation, lscale | Parameter link functions applied to the location parameter and the (positive) scale parameter.
scale.arg | Known positive scale parameter (called s below).
ilocation, iscale | See CommonVGAMffArguments for information.
imethod, zero | See CommonVGAMffArguments for information.
The two-parameter logistic distribution has a density that can be written as f(y; l, s) = exp(-(y - l)/s) / [s * (1 + exp(-(y - l)/s))^2], where s > 0 is the scale parameter and l is the location parameter. The response y lies over the whole real line. The mean of Y (which is the fitted value) is l and its variance is (pi^2 / 3) * s^2.

A logistic distribution with scale = 0.65 (see dlogis) resembles dt with df = 7; see logistic1 and studentt.

logistic1 estimates the location parameter only while logistic estimates both parameters. By default, eta1 = l and eta2 = log(s) for logistic. logistic can handle multiple responses.
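As a quick numerical check of the resemblance noted above (an illustrative snippet using base R only; it is not part of these family functions):

x <- seq(-4, 4, by = 0.25)
max(abs(dlogis(x, scale = 0.65) - dt(x, df = 7)))  # Small, under 0.01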
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
Fisher scoring is used, and the Fisher information matrix is diagonal.
T. W. Yee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1, New York: Wiley. Chapter 15.
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
Castillo, E., Hadi, A. S., Balakrishnan, N. and Sarabia, J. S. (2005). Extreme Value and Related Models with Applications in Engineering and Science, Hoboken, NJ, USA: Wiley-Interscience, p.130.
deCani, J. S. and Stine, R. A. (1986). A Note on Deriving the Information Matrix for a Logistic Distribution, The American Statistician, 40, 220–222.
rlogis
,
CommonVGAMffArguments
,
logitlink
,
gensh
,
cumulative
,
bilogistic
,
simulate.vlm
.
# Location unknown, scale known
ldata <- data.frame(x2 = runif(nn <- 500))
ldata <- transform(ldata, y1 = rlogis(nn, loc = 1 + 5*x2, sc = exp(2)))
fit1 <- vglm(y1 ~ x2, logistic1(scale = exp(2)), ldata, trace = TRUE)
coef(fit1, matrix = TRUE)

# Both location and scale unknown
ldata <- transform(ldata, y2 = rlogis(nn, loc = 1 + 5*x2, exp(x2)))
fit2 <- vglm(cbind(y1, y2) ~ x2, logistic, data = ldata, trace = TRUE)
coef(fit2, matrix = TRUE)
vcov(fit2)
summary(fit2)
Computes the logit transformation, including its inverse and the first nine derivatives.
logitlink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
          short = TRUE, tag = FALSE)
extlogitlink(theta, min = 0, max = 1, bminvalue = NULL, bmaxvalue = NULL,
             inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue, bminvalue, bmaxvalue | See Links.
min, max | For extlogitlink, the known lower and upper limits, A and B, of the parameter's interval.
inverse, deriv, short, tag | Details at Links.
The logit link function is very commonly used for parameters that lie in the unit interval. It is the inverse CDF of the logistic distribution. Numerical values of theta close to 0 or 1 or out of range result in Inf, -Inf, NA or NaN.

The extended logit link function extlogitlink should be used more generally for parameters that lie in the interval (A, B), say. The formula is log((theta - A)/(B - theta)), and the default values for A and B correspond to the ordinary logit function. Numerical values of theta close to A or B or out of range result in Inf, -Inf, NA or NaN. However these can be replaced by the values bminvalue and bmaxvalue first before computing the link function.
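Two quick sanity checks of the claims above (illustrative only): logitlink() agrees with the logistic quantile function qlogis(), and extlogitlink() at its defaults reduces to logitlink():

p <- seq(0.1, 0.9, by = 0.2)
max(abs(logitlink(p) - qlogis(p)))        # Effectively 0
max(abs(extlogitlink(p) - logitlink(p)))  # Defaults are min = 0, max = 1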
For logitlink
with deriv = 0
, the logit
of theta
, i.e., log(theta/(1-theta))
when
inverse = FALSE
, and if inverse = TRUE
then
exp(theta)/(1+exp(theta))
.
For deriv = 1
, then the function returns
d eta
/ d theta
as a function of
theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
Numerical instability may occur when theta is close to 1 or 0 (for logitlink), or close to A or B for extlogitlink. One way of overcoming this is to use, e.g., bvalue.
In terms of the threshold approach with cumulative probabilities
for an ordinal response this link function corresponds to the
univariate logistic distribution (see logistic
).
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
alogitlink
,
asinlink
,
logitoffsetlink
,
probitlink
,
clogloglink
,
cauchitlink
,
logistic1
,
loglink
,
Logistic
,
multilogitlink
.
p <- seq(0.01, 0.99, by = 0.01)
logitlink(p)
max(abs(logitlink(logitlink(p), inverse = TRUE) - p))  # 0?

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
logitlink(p)  # Has NAs
logitlink(p, bvalue = .Machine$double.eps)  # Has no NAs

p <- seq(0.9, 2.2, by = 0.1)
extlogitlink(p, min = 1, max = 2,
             bminvalue = 1 + .Machine$double.eps,
             bmaxvalue = 2 - .Machine$double.eps)  # Has no NAs

## Not run: 
par(mfrow = c(2, 2), lwd = (mylwd <- 2))
y <- seq(-4, 4, length = 100)
p <- seq(0.01, 0.99, by = 0.01)

for (d in 0:1) {
  myinv <- (d > 0)
  matplot(p, cbind(logitlink(p, deriv = d, inv = myinv),
                   probitlink(p, deriv = d, inv = myinv)),
          las = 1, type = "n", col = "purple", ylab = "transformation",
          main = if (d == 0) "Some probability link functions" else
                 "1 / first derivative")
  lines(p,   logitlink(p, deriv = d, inverse = myinv), col = "limegreen")
  lines(p,  probitlink(p, deriv = d, inverse = myinv), col = "purple")
  lines(p, clogloglink(p, deriv = d, inverse = myinv), col = "chocolate")
  lines(p, cauchitlink(p, deriv = d, inverse = myinv), col = "tan")
  if (d == 0) {
    abline(v = 0.5, h = 0, lty = "dashed")
    legend(0, 4.5, c("logitlink", "probitlink", "clogloglink",
                     "cauchitlink"),
           col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd)
  } else
    abline(v = 0.5, lty = "dashed")
}

for (d in 0) {
  matplot(y, cbind(logitlink(y, deriv = d, inverse = TRUE),
                   probitlink(y, deriv = d, inverse = TRUE)),
          las = 1, type = "n", col = "purple", xlab = "transformation",
          ylab = "p",
          main = if (d == 0) "Some inverse probability link functions" else
                 "First derivative")
  lines(y,   logitlink(y, deriv = d, inv = TRUE), col = "limegreen")
  lines(y,  probitlink(y, deriv = d, inv = TRUE), col = "purple")
  lines(y, clogloglink(y, deriv = d, inv = TRUE), col = "chocolate")
  lines(y, cauchitlink(y, deriv = d, inv = TRUE), col = "tan")
  if (d == 0) {
    abline(h = 0.5, v = 0, lty = "dashed")
    legend(-4, 1, c("logitlink", "probitlink", "clogloglink",
                    "cauchitlink"),
           col = c("limegreen", "purple", "chocolate", "tan"), lwd = mylwd)
  }
}

p <- seq(0.21, 0.59, by = 0.01)
plot(p, extlogitlink(p, min = 0.2, max = 0.6), xlim = c(0, 1),
     type = "l", col = "black", ylab = "transformation", las = 1,
     main = "extlogitlink(p, min = 0.2, max = 0.6)")
par(lwd = 1)
## End(Not run)
Computes the logitoffsetlink transformation, including its inverse and the first two derivatives.
logitoffsetlink(theta, offset = 0, inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
offset | The offset value(s), which must be non-negative. It is called K below.
inverse, deriv, short, tag | Details at Links.
This link function allows for some asymmetry compared to the ordinary logitlink link. The formula is log(theta/(1 - theta) - K), and the default value for the offset K corresponds to the ordinary logitlink link. When inverse = TRUE the value will lie in the interval (K/(1 + K), 1).
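A small numerical illustration of the interval claim above (illustrative only):

K <- 0.05
logitoffsetlink(c(-20, 20), offset = K, inverse = TRUE)  # Near the limits
K / (1 + K)  # Lower limit of the interval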
For logitoffsetlink
with deriv = 0
, the
logitoffsetlink of theta
, i.e.,
log(theta/(1-theta) - K)
when inverse = FALSE
,
and if inverse = TRUE
then
(K + exp(theta))/(1 + exp(theta) + K)
.
For deriv = 1
, then the function returns d
eta
/ d theta
as a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
This function is numerically less stable than logitlink.
Thomas W. Yee
Komori, O. and Eguchi, S. et al. (2016). An asymmetric logistic model for ecological data. Methods in Ecology and Evolution, 7.
p <- seq(0.05, 0.99, by = 0.01); myoff <- 0.05
logitoffsetlink(p, myoff)
max(abs(logitoffsetlink(logitoffsetlink(p, myoff), myoff,
                        inverse = TRUE) - p))  # Should be 0
Calculates the log-likelihood value or the element-by-element contributions of the log-likelihood.
## S3 method for class 'vlm'
logLik(object, summation = TRUE, ...)
object | Some VGAM object, for example, having class vglm-class.
summation | Logical, apply sum? If FALSE then the individual contributions to the log-likelihood are returned.
... | Currently unused. In the future: other possible arguments might be fed into logLik in order to compute the log-likelihood.
By default, this function returns the log-likelihood of the object. Thus this code relies on the log-likelihood being defined, and computed, for the object.
Returns the log-likelihood of the object. If summation = FALSE then an n-vector or n-row matrix (with the number of responses as the number of columns) is returned. Each element is the contribution to the log-likelihood. The prior weights are assimilated within the answer.
Not all VGAM family functions have had the
summation
checked.
Not all VGAM family functions currently have the
summation
argument implemented.
T. W. Yee.
VGLMs are described in vglm-class
;
VGAMs are described in vgam-class
;
RR-VGLMs are described in rrvglm-class
;
AIC
;
anova.vglm
.
zdata <- data.frame(x2 = runif(nn <- 50))
zdata <- transform(zdata,
                   Ps01    = logitlink(-0.5      , inverse = TRUE),
                   Ps02    = logitlink( 0.5      , inverse = TRUE),
                   lambda1 =  loglink(-0.5 + 2*x2, inverse = TRUE),
                   lambda2 =  loglink( 0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y1 = rzipois(nn, lambda = lambda1, pstr0 = Ps01),
                          y2 = rzipois(nn, lambda = lambda2, pstr0 = Ps02))
with(zdata, table(y1))  # Eyeball the data
with(zdata, table(y2))
fit2 <- vglm(cbind(y1, y2) ~ x2, zipoisson(zero = NULL), data = zdata)
logLik(fit2)  # Summed over the two responses
sum(logLik(fit2, sum = FALSE))  # For checking purposes
(ll.matrix <- logLik(fit2, sum = FALSE))  # nn x 2 matrix
colSums(ll.matrix)  # log-likelihood for each response
Fits a loglinear model to two binary responses.
loglinb2(exchangeable = FALSE, zero = "u12")
exchangeable | Logical. If TRUE, the two responses are treated as statistically exchangeable (so that u1 = u2).
zero | Which linear/additive predictors are modelled as intercept-only? A NULL means none of them.
The model is P(Y1 = y1, Y2 = y2) = exp(u0 + u1*y1 + u2*y2 + u12*y1*y2), where y1 and y2 are 0 or 1, and the parameters are u1, u2, u12. The normalizing parameter u0 can be expressed as a function of the other parameters, viz., u0 = -log[1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12)]. The linear/additive predictors are (eta1, eta2, eta3)^T = (u1, u2, u12)^T.
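The normalizing constant can be verified directly; a minimal sketch with arbitrary parameter values:

u1 <- 0.2; u2 <- -0.3; u12 <- 0.5
u0 <- -log(1 + exp(u1) + exp(u2) + exp(u1 + u2 + u12))
probs <- exp(u0 + c(0, u2, u1, u1 + u2 + u12))  # (0,0), (0,1), (1,0), (1,1)
sum(probs)  # Must equal 1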
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
When fitted, the fitted.values slot of the object contains the four joint probabilities, labelled as (Y1, Y2) = (0,0), (0,1), (1,0), (1,1), respectively.
The response must be a two-column matrix of ones and zeros only.
This is more restrictive than binom2.or
,
which can handle more types of input formats.
Note that each of the 4 combinations of the multivariate response needs to appear in the data set.
After estimation, the response attached to the object is also a
two-column matrix; possibly in the future it might change into a
four-column matrix.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (2001). Discussion to: “Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)” by Gao, F., Wahba, G., Klein, R., Klein, B. Journal of the American Statistical Association, 96, 127–160.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
binom2.or
,
binom2.rho
,
loglinb3
.
coalminers <- transform(coalminers, Age = (age - 42) / 5)
# Get the n x 4 matrix of counts
fit0 <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age, binom2.or, coalminers)
counts <- round(c(weights(fit0, type = "prior")) * depvar(fit0))
# Create a n x 2 matrix response for loglinb2()
# bwmat <- matrix(c(0,0, 0,1, 1,0, 1,1), 4, 2, byrow = TRUE)
bwmat <- cbind(bln = c(0, 0, 1, 1), wheeze = c(0, 1, 0, 1))
matof1 <- matrix(1, nrow(counts), 1)
newminers <-
  data.frame(bln    = kronecker(matof1, bwmat[, 1]),
             wheeze = kronecker(matof1, bwmat[, 2]),
             wt     = c(t(counts)),
             Age    = with(coalminers, rep(age, rep(4, length(age)))))
newminers <- newminers[with(newminers, wt) > 0, ]
fit <- vglm(cbind(bln, wheeze) ~ Age, loglinb2(zero = NULL),
            weight = wt, data = newminers)
coef(fit, matrix = TRUE)  # Same! (at least for the log odds-ratio)
summary(fit)

# Try reconcile this with McCullagh and Nelder (1989), p.234
(0.166 - 0.131) / 0.027458  # 1.275 is approximately 1.25
Fits a loglinear model to three binary responses.
loglinb3(exchangeable = FALSE, zero = c("u12", "u13", "u23"))
exchangeable | Logical. If TRUE, the three responses are treated as statistically exchangeable.
zero | Which linear/additive predictors are modelled as intercept-only? A NULL means none of them.
The model is P(Y1 = y1, Y2 = y2, Y3 = y3) = exp(u0 + u1*y1 + u2*y2 + u3*y3 + u12*y1*y2 + u13*y1*y3 + u23*y2*y3), where y1, y2 and y3 are 0 or 1, and the parameters are u1, u2, u3, u12, u13, u23. The normalizing parameter u0 can be expressed as a function of the other parameters. Note that a third-order association parameter, u123 for the product y1*y2*y3, is assumed to be zero for this family function. The linear/additive predictors are (eta1, eta2, ..., eta6)^T = (u1, u2, u3, u12, u13, u23)^T.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
When fitted, the fitted.values slot of the object contains the eight joint probabilities, labelled as (Y1, Y2, Y3) = (0,0,0), (0,0,1), (0,1,0), (0,1,1), (1,0,0), (1,0,1), (1,1,0), (1,1,1), respectively.
The response must be a 3-column matrix of ones and zeros only. Note that each of the 8 combinations of the multivariate response needs to appear in the data set, therefore data sets will need to be large in order for this family function to work. After estimation, the response attached to the object is also a 3-column matrix; possibly in the future it might change into an 8-column matrix.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (2001). Discussion to: “Smoothing spline ANOVA for multivariate Bernoulli observations, with application to ophthalmology data (with discussion)” by Gao, F., Wahba, G., Klein, R., Klein, B. Journal of the American Statistical Association, 96, 127–160.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
lfit <- vglm(cbind(cyadea, beitaw, kniexc) ~ altitude, loglinb3,
             data = hunua, trace = TRUE)
coef(lfit, matrix = TRUE)
head(fitted(lfit))
summary(lfit)
Computes the log transformation, including its inverse and the first two derivatives.
loglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
        short = TRUE, tag = FALSE)
negloglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
           short = TRUE, tag = FALSE)
logneglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
           short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue | See Links.
inverse, deriv, short, tag | Details at Links.
The log link function is very commonly used for parameters that are positive. Here, all logarithms are natural logarithms, i.e., to base e. Numerical values of theta close to 0 or out of range result in Inf, -Inf, NA or NaN.

The function loglink computes log(theta) whereas negloglink computes -log(theta). The function logneglink computes log(-theta), hence is suitable for parameters that are negative, e.g., a trap-shy effect in posbernoulli.b.
The following concerns loglink
.
For deriv = 0
, the log of theta
, i.e.,
log(theta)
when inverse = FALSE
, and if
inverse = TRUE
then exp(theta)
.
For deriv = 1
, then the function returns
d eta
/ d theta
as a function of
theta
if inverse = FALSE
, else if
inverse = TRUE
then it returns the reciprocal.
This function was called loge
to avoid conflict with the
log
function.
Numerical instability may occur when theta
is close to
0 unless bvalue
is used.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
explink
,
logitlink
,
logclink
,
logloglink
,
log
,
logofflink
,
lambertW
,
posbernoulli.b
.
## Not run: 
loglink(seq(-0.2, 0.5, by = 0.1))
loglink(seq(-0.2, 0.5, by = 0.1), bvalue = .Machine$double.xmin)
negloglink(seq(-0.2, 0.5, by = 0.1))
negloglink(seq(-0.2, 0.5, by = 0.1), bvalue = .Machine$double.xmin)
## End(Not run)
logneglink(seq(-0.5, -0.2, by = 0.1))
Computes the log-log and log-log-log transformations, including their inverse and the first two derivatives.
logloglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
           short = TRUE, tag = FALSE)
loglogloglink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
              short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue | Values of theta which are out of range can be replaced by bvalue before computing the link function value. See Links.
inverse, deriv, short, tag | Details at Links.
The log-log link function is commonly used for parameters that are greater than unity. Similarly, the log-log-log link function is applicable for parameters that are greater than e. Numerical values of theta close to 1 or e or out of range result in Inf, -Inf, NA or NaN.

One possible application of loglogloglink() is to the k parameter (also called size) of negbinomial fitted to Poisson-like data but with only a small amount of overdispersion; then k is a large number relative to munb. In such situations a loglink or logloglink link may not be sufficient to draw the estimate toward the interior of the parameter space. Using a stronger link function can help mitigate the Hauck-Donner effect (see hdeff).
For logloglink()
:
for deriv = 0
, the log of log(theta)
, i.e.,
log(log(theta))
when inverse = FALSE
,
and if inverse = TRUE
then
exp(exp(theta))
.
For loglogloglink()
:
for deriv = 0
, the log of log(log(theta))
, i.e.,
log(log(log(theta)))
when inverse = FALSE
,
and if inverse = TRUE
then
exp(exp(exp(theta)))
.
For deriv = 1
, then the function returns
d theta
/ d eta
as a function
of theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
Numerical instability may occur when theta is close to 1 or e unless bvalue is used.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
x <- seq(0.8, 1.5, by = 0.1)
logloglink(x)  # Has NAs
logloglink(x, bvalue = 1.0 + .Machine$double.eps)  # Has no NAs

x <- seq(1.01, 10, len = 100)
logloglink(x)
max(abs(logloglink(logloglink(x), inverse = TRUE) - x))  # 0?
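The example above exercises logloglink() only; an analogous round-trip check for loglogloglink() (illustrative; theta must exceed e) might be:

x <- seq(3, 10, len = 50)  # All values exceed exp(1)
max(abs(loglogloglink(loglogloglink(x), inverse = TRUE) - x))  # 0?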
Maximum likelihood estimation of the (univariate) lognormal distribution.
lognormal(lmeanlog = "identitylink", lsdlog = "loglink", zero = "sdlog")
lmeanlog, lsdlog | Parameter link functions applied to the mean and (positive) standard deviation of the logged response, i.e., the meanlog and sdlog parameters.
zero | Specifies which linear/additive predictor is modelled as intercept-only. For more information see CommonVGAMffArguments.
A random variable Y has a 2-parameter lognormal distribution if log(Y) is distributed N(meanlog, sdlog^2). The expected value of Y, which is exp(meanlog + 0.5 * sdlog^2) and not meanlog, makes up the fitted values. The variance of Y is (exp(sdlog^2) - 1) * exp(2 * meanlog + sdlog^2).
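A quick check of the fitted-value formula above (an illustrative sketch; it assumes the intercept-only fit converges and that Coef() returns the meanlog and sdlog estimates in that order):

set.seed(1)
ldat <- data.frame(y = rlnorm(1000, meanlog = 1, sdlog = 0.5))
fit <- vglm(y ~ 1, lognormal, data = ldat)
# First fitted value versus exp(meanlog + 0.5 * sdlog^2):
c(fitted(fit)[1], exp(Coef(fit)[1] + 0.5 * Coef(fit)[2]^2))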
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Lognormal
,
uninormal
,
CommonVGAMffArguments
,
simulate.vlm
.
ldata2 <- data.frame(x2 = runif(nn <- 1000))
ldata2 <- transform(ldata2, y1 = rlnorm(nn, 1 + 2 * x2, sd = exp(-1)),
                            y2 = rlnorm(nn, 1, sd = exp(-1 + x2)))
fit1 <- vglm(y1 ~ x2, lognormal(zero = 2), data = ldata2, trace = TRUE)
fit2 <- vglm(y2 ~ x2, lognormal(zero = 1), data = ldata2, trace = TRUE)
coef(fit1, matrix = TRUE)
coef(fit2, matrix = TRUE)
Computes the log transformation with an offset, including its inverse and the first two derivatives.
logofflink(theta, offset = 0, inverse = FALSE, deriv = 0,
           short = TRUE, tag = FALSE)
log1plink(theta, offset = 0, inverse = FALSE, deriv = 0,
          short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
offset | Offset value. See Links.
inverse, deriv, short, tag | Details at Links.
The log-offset link function is very commonly used for parameters that are greater than a certain value. In particular, it is defined by log(theta + offset) where offset is the offset value. For example, if offset = 0.5 then the value of theta is restricted to be greater than -0.5.

Numerical values of theta close to -offset or out of range result in Inf, -Inf, NA or NaN.

The offset is implicitly 1 in log1plink. It is equivalent to logofflink(offset = 1) but is more accurate if abs(theta) is tiny. It may be used for lrho in extbetabinomial provided an offset of log(size - 1) is included in the linear predictor.
For deriv = 0
, the log of theta+offset
,
i.e.,
log(theta+offset)
when inverse = FALSE
,
and if inverse = TRUE
then
exp(theta)-offset
.
For deriv = 1
, then the function returns
d theta
/ d eta
as
a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns
the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
The default means this function is identical
to loglink
.
Numerical instability may occur when theta
is
close to -offset
.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
loglink
,
extbetabinomial
.
## Not run: 
logofflink(seq(-0.2, 0.5, by = 0.1))
logofflink(seq(-0.2, 0.5, by = 0.1), offset = 0.5)
log(seq(-0.2, 0.5, by = 0.1) + 0.5)
## End(Not run)
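The example above does not exercise log1plink(); a small illustrative check of its stated equivalence to logofflink(offset = 1) for tiny theta (the internal use of log1p() is an assumption based on the accuracy claim above):

theta <- 1e-12
log1plink(theta)               # Presumably computed via log1p(), so accurate
logofflink(theta, offset = 1)  # Plain log(1 + theta) in floating point
log1p(theta)                   # Reference value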
Maximum likelihood estimation of the 2-parameter Lomax distribution.
lomax(lscale = "loglink", lshape3.q = "loglink", iscale = NULL,
      ishape3.q = NULL, imethod = 1, gscale = exp(-5:5),
      gshape3.q = seq(0.75, 4, by = 0.25),
      probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lscale, lshape3.q | Parameter link functions applied to the (positive) parameters scale and q.
iscale, ishape3.q, imethod | See CommonVGAMffArguments for information.
gscale, gshape3.q, zero, probs.y | See CommonVGAMffArguments for information.
The 2-parameter Lomax distribution is the 4-parameter generalized beta II distribution with shape parameters a = p = 1. It is probably more widely known as the Pareto (II) distribution. It is also the 3-parameter Singh-Maddala distribution with shape parameter a = 1, as well as the beta distribution of the second kind with p = 1. More details can be found in Kleiber and Kotz (2003).

The Lomax distribution has density f(y) = q / [b * (1 + y/b)^(1 + q)] for y >= 0, b > 0, q > 0. Here, b is the scale parameter scale, and q is a shape parameter. The cumulative distribution function is F(y) = 1 - [1 + (y/b)]^(-q). The mean is b/(q - 1) provided q > 1; these are returned as the fitted values.

This family function handles multiple responses.
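A quick simulation check of the mean formula b/(q - 1) (illustrative only, using rlomax documented below):

set.seed(1)
b <- exp(1); q <- exp(2)  # scale and shape3.q; note q > 1 so the mean exists
y <- rlomax(1e5, scale = b, shape3.q = q)
c(mean(y), b / (q - 1))   # Should be close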
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
See the notes in genbetaII
.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Lomax
,
genbetaII
,
betaII
,
dagum
,
sinmad
,
fisk
,
inv.lomax
,
paralogistic
,
inv.paralogistic
,
simulate.vlm
.
ldata <- data.frame(y = rlomax(n = 1000, scale = exp(1), exp(2)))
fit <- vglm(y ~ 1, lomax, data = ldata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, distribution function, quantile function and random
generation for the Lomax distribution with scale parameter
scale
and shape parameter q
.
dlomax(x, scale = 1, shape3.q, log = FALSE)
plomax(q, scale = 1, shape3.q, lower.tail = TRUE, log.p = FALSE)
qlomax(p, scale = 1, shape3.q, lower.tail = TRUE, log.p = FALSE)
rlomax(n, scale = 1, shape3.q)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n | number of observations. If length(n) > 1, the length is taken to be the number required.
scale | scale parameter.
shape3.q | shape parameter.
log | Logical. If log = TRUE then the logarithm of the density is returned.
lower.tail, log.p | Same meaning as in punif or qunif.
See lomax
, which is the VGAM family function
for estimating the parameters by maximum likelihood estimation.
dlomax
gives the density,
plomax
gives the distribution function,
qlomax
gives the quantile function, and
rlomax
generates random deviates.
The Lomax distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee and Kai Huang
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
probs <- seq(0.1, 0.9, by = 0.1)
max(abs(plomax(qlomax(p = probs, shape3.q = 1),
               shape3.q = 1) - probs))  # Should be 0

## Not run: 
par(mfrow = c(1, 2))
x <- seq(-0.01, 5, len = 401)
plot(x, dexp(x), type = "l", col = "black", ylab = "", ylim = c(0, 3),
     main = "Black is std exponential, others are dlomax(x, shape3.q)")
lines(x, dlomax(x, shape3.q = 1), col = "orange")
lines(x, dlomax(x, shape3.q = 2), col = "blue")
lines(x, dlomax(x, shape3.q = 5), col = "green")
legend("topright", col = c("orange", "blue", "green"), lty = rep(1, 3),
       legend = paste("shape3.q =", c(1, 2, 5)))

plot(x, pexp(x), type = "l", col = "black", ylab = "", las = 1,
     main = "Black is std exponential, others are plomax(x, shape3.q)")
lines(x, plomax(x, shape3.q = 1), col = "orange")
lines(x, plomax(x, shape3.q = 2), col = "blue")
lines(x, plomax(x, shape3.q = 5), col = "green")
legend("bottomright", col = c("orange", "blue", "green"), lty = rep(1, 3),
       legend = paste("shape3.q =", c(1, 2, 5)))
## End(Not run)
Abundance of Leadbeater's Possums observed in the field.
data(lpossums)
A data frame with the following variables.
number | Values between 0 and 10 excluding 6.
ofreq | Observed frequency, i.e., the number of sites.
A small data set recording the abundance of Leadbeater's Possums Gymnobelideus leadbeateri observed in the montane ash forests of the Central Highlands of Victoria, in south-eastern Australia. There are 151 3-hectare sites. The data have more 0s than expected under a Poisson distribution, and also exhibit overdispersion.
Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996). Modelling the abundances of rare species: statistical models for counts with extra zeros. Ecological Modelling, 88, 297–308.
lpossums
(samplemean <- with(lpossums, weighted.mean(number, ofreq)))
with(lpossums, var(rep(number, times = ofreq)) / samplemean)
sum(with(lpossums, ofreq))
## Not run: 
spikeplot(with(lpossums, rep(number, times = ofreq)),
          main = "Leadbeater's possums", col = "blue", xlab = "Number")
## End(Not run)
Minimizes the L-q norm of residuals in a linear model.
lqnorm(qpower = 2, link = "identitylink", imethod = 1,
       imu = NULL, ishrinkage = 0.95)
qpower | A single numeric, must be greater than one, called q below.
link | Link function applied to the ‘mean’ mu.
imethod | Must be 1, 2 or 3. See CommonVGAMffArguments for information.
imu | Numeric, optional initial values used for the fitted values. The default is to use imethod.
ishrinkage | How much shrinkage is used when initializing the fitted values. The value must be between 0 and 1 inclusive, and a value of 0 means the individual response values are used, and a value of 1 means the median or mean is used. This argument is used in conjunction with imethod.
This function minimizes the objective function sum_{i=1}^n w_i * |y_i - mu_i|^q, where q is the argument qpower, mu_i = g^(-1)(eta_i) where g is the link function, and eta is the vector of linear/additive predictors. The prior weights w_i can be inputted using the weights argument of vlm/vglm/vgam etc.; it should be just a vector here since this function handles only a single vector or one-column response.

Numerical problems will occur when q is too close to one. Probably reasonable values range from 1.5 and up, say. The value q = 2 corresponds to ordinary least squares while q = 1 corresponds to the MLE of a double exponential (Laplace) distribution. The procedure becomes more sensitive to outliers the larger the value of q.
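As a cross-check of the q = 2 case mentioned above, an lqnorm() fit should essentially reproduce ordinary least squares (an illustrative sketch):

set.seed(2)
d <- data.frame(x2 = runif(30))
d <- transform(d, y = 1 + 2 * x2 + rnorm(30, sd = 0.1))
fit.q2 <- vglm(y ~ x2, lqnorm(qpower = 2), data = d)
coef(fit.q2, matrix = TRUE)
coef(lm(y ~ x2, data = d))  # Should essentially agree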
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Convergence failure is common, therefore the user is advised to be cautious and monitor convergence!
This VGAM family function is an initial attempt to
provide a more robust alternative for regression and/or offer
a little more flexibility than least squares.
The @misc
slot of the fitted object contains a list
component called objectiveFunction
which is the value
of the objective function at the final iteration.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
set.seed(123)
ldata <- data.frame(x = sort(runif(nn <- 10 )))
realfun <- function(x) 4 + 5*x
ldata <- transform(ldata, y = realfun(x) + rnorm(nn, sd = exp(-1)))
# Make the first observation an outlier
ldata <- transform(ldata, y = c(4*y[1], y[-1]), x = c(-1, x[-1]))
fit <- vglm(y ~ x, lqnorm(qpower = 1.2), data = ldata)
coef(fit, matrix = TRUE)
head(fitted(fit))
fit@misc$qpower
fit@misc$objectiveFunction

## Not run: 
# Graphical check
with(ldata, plot(x, y,
     main = paste0("LS = red, lqnorm = blue (qpower = ",
                   fit@misc$qpower, "), truth = black"), col = "blue"))
lmfit <- lm(y ~ x, data = ldata)
with(ldata, lines(x, fitted(fit), col = "blue"))
with(ldata, lines(x, lmfit$fitted, col = "red"))
with(ldata, lines(x, realfun(x), col = "black"))
## End(Not run)
Generic function that computes likelihood ratio test (LRT) statistics evaluated at the null values (consequently they do not suffer from the Hauck-Donner effect).
lrt.stat(object, ...)
lrt.stat.vlm(object, values0 = 0, subset = NULL, omit1s = TRUE,
             all.out = FALSE, trace = FALSE, ...)
object, values0, subset | Same as in wald.stat.vlm.
omit1s, all.out, trace | Same as in wald.stat.vlm.
... | Ignored for now.
When summary()
is applied to a vglm
object
a 4-column Wald table is produced.
The corresponding p-values are generally viewed as inferior to
those from a likelihood ratio test (LRT).
For example, the Hauck and Donner (1977) effect (HDE) produces
p-values that are biased upwards (see hdeff
).
Other reasons are that the Wald test is often less accurate
(especially in small samples) and is not invariant to
parameterization.
By default, this function returns p-values based on the LRT by
deleting one column at a time from the big VLM matrix
and then restarting IRLS to obtain convergence (hopefully).
Twice the difference between the log-likelihoods
(or equivalently, the difference in the deviances if they are defined)
is asymptotically chi-squared with 1 degree of freedom.
One might expect the p-values from this function
therefore to be more accurate
and not suffer from the HDE.
Thus this function is a recommended
alternative (if it works) to summaryvglm
for testing for the significance of a regression coefficient.
By default, a vector of signed square root of the LRT statistics;
these are asymptotically standard normal under the null hypotheses.
If all.out = TRUE
then a list is returned with the
following components:
lrt.stat
the signed LRT statistics,
pvalues
the 2-sided p-values,
Lrt.stat2
the usual LRT statistic,
values0
the null values.
See wald.stat.vlm
.
T. W. Yee.
score.stat
,
wald.stat
,
summaryvglm
,
anova.vglm
,
vglm
,
lrtest
,
confintvglm
,
pchisq
,
profilevglm
,
hdeff
.
set.seed(1)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = rnorm(nrow(pneumo)))
fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)
cbind(coef(summary(fit)),
      "signed LRT stat" = lrt.stat(fit, omit1s = FALSE))
summary(fit, lrt0 = TRUE)  # Easy way to get it
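Because the returned statistics are asymptotically standard normal under the null, two-sided p-values follow directly (illustrative, reusing fit from above):

z <- lrt.stat(fit, omit1s = FALSE)
2 * pnorm(-abs(z))  # Two-sided p-values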
lrtest
is a generic function for carrying out
likelihood ratio tests.
The default method can be employed for comparing nested VGLMs
(see details below).
lrtest(object, ...)
lrtest_vglm(object, ..., no.warning = FALSE, name = NULL)
object | a fitted model object, typically of class "vglm".
... | further object specifications passed to methods. See below for details.
no.warning | logical; if TRUE then no warning is issued.
name | a function for extracting a suitable name/description from a fitted model object. By default the name is queried by calling formula.
lrtest
is intended to be a generic function for
comparisons of models via asymptotic likelihood ratio
tests. The default method consecutively compares the
fitted model object object
with the models passed
in ...
. Instead of passing the fitted model
objects in ...
, several other specifications
are possible. The updating mechanism is the same as for
waldtest()
in lmtest:
the models in ...
can be specified as integers, characters (both for terms
that should be eliminated from the previous model),
update formulas or fitted model objects. Except for
the last case, the existence of an update
method is assumed.
See waldtest()
in lmtest for details.
Subsequently, an asymptotic likelihood ratio test for each
two consecutive models is carried out: Twice the difference
in log-likelihoods (as derived by the logLik
methods) is compared with a Chi-squared distribution.
An object of class "VGAManova"
which contains a slot
with the
log-likelihood, degrees of freedom, the difference in
degrees of freedom, likelihood ratio Chi-squared statistic
and corresponding p value.
These are printed by stats:::print.anova()
;
see anova
.
Several VGAM family functions implement distributions which do not satisfy the usual regularity conditions needed for the LRT to work. No checking or warning is given for these.
The code was adapted directly from lmtest (written by T. Hothorn, A. Zeileis, G. Millo, D. Mitchell) and made to work for VGLMs and S4. This help file also was adapted from lmtest.
Approximate LRTs might be applied to VGAMs, as
produced by vgam
, but it is probably better in
inference to use vglm
with regression splines
(bs
and
ns
).
This methods function should not be applied to other models
such as those produced
by rrvglm
,
by cqo
,
by cao
.
lmtest,
vglm
,
lrt.stat.vlm
,
score.stat.vlm
,
wald.stat.vlm
,
anova.vglm
.
set.seed(1)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))
fit1 <- vglm(cbind(normal, mild, severe) ~ let     , propodds, pneumo)
fit2 <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, pneumo)
fit3 <- vglm(cbind(normal, mild, severe) ~ let     , cumulative, pneumo)

# Various equivalent specifications of the LR test for testing x3
(ans1 <- lrtest(fit2, fit1))
ans2 <- lrtest(fit2, 2)
ans3 <- lrtest(fit2, "x3")
ans4 <- lrtest(fit2, . ~ . - x3)
c(all.equal(ans1, ans2), all.equal(ans1, ans3), all.equal(ans1, ans4))

# Doing it manually
(testStatistic <- 2 * (logLik(fit2) - logLik(fit1)))
(pval <- pchisq(testStatistic, df = df.residual(fit1) - df.residual(fit2),
                lower.tail = FALSE))

(ans4 <- lrtest(fit3, fit1))  # Test PO (parallelism) assumption
Generic function for a latent variable plot (also known as an ordination diagram by ecologists).
lvplot(object, ...)
object | An object for which a latent variable plot is meaningful.
... | Other arguments fed into the specific methods function of the model. They usually are graphical parameters, and sometimes they are fed into the methods function for Coef.
Latent variables occur in reduced-rank regression models, as well as in quadratic and additive ordination. For the latter, latent variables are often called the site scores. Latent variable plots were coined by Yee (2004), and have the latent variable as at least one of its axes.
The value returned depends specifically on the methods function invoked.
Latent variables are not really applicable to
vglm
/vgam
models.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
lvplot.qrrvglm
,
lvplot.cao
,
latvar
,
trplot
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdz environmental vars
set.seed(123)
p1 <- cao(cbind(Pardlugu, Pardmont, Pardnigr, Pardpull, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          family = poissonff, data = hspider, Bestof = 3,
          df1.nl = c(Zoraspin = 2.5, 3), Crow1positive = TRUE)
index <- 1:ncol(depvar(p1))
lvplot(p1, lcol = index, pcol = index, y = TRUE, las = 1)
## End(Not run)
Produces an ordination diagram (latent variable plot) for quadratic ordination (QO) models. For rank-1 models, the x-axis is the first ordination/constrained/canonical axis. For rank-2 models, the x- and y-axis are the first and second ordination axes respectively.
lvplot.qrrvglm(object, varI.latvar = FALSE, refResponse = NULL,
    add = FALSE, show.plot = TRUE, rug = TRUE, y = FALSE,
    type = c("fitted.values", "predictors"),
    xlab = paste0("Latent Variable", if (Rank == 1) "" else " 1"),
    ylab = if (Rank == 1) switch(type, predictors = "Predictors",
           fitted.values = "Fitted values") else "Latent Variable 2",
    pcex = par()$cex, pcol = par()$col, pch = par()$pch,
    llty = par()$lty, lcol = par()$col, llwd = par()$lwd,
    label.arg = FALSE, adj.arg = -0.1,
    ellipse = 0.95, Absolute = FALSE,
    elty = par()$lty, ecol = par()$col, elwd = par()$lwd, egrid = 200,
    chull.arg = FALSE, clty = 2, ccol = par()$col, clwd = par()$lwd,
    cpch = " ",
    C = FALSE, OriginC = c("origin", "mean"),
    Clty = par()$lty, Ccol = par()$col, Clwd = par()$lwd,
    Ccex = par()$cex, Cadj.arg = -0.1, stretchC = 1,
    sites = FALSE, spch = NULL, scol = par()$col, scex = par()$cex,
    sfont = par()$font, check.ok = TRUE, jitter.y = FALSE, ...)
object |
A CQO object. |
varI.latvar |
Logical that is fed into |
refResponse |
Integer or character that is fed into |
add |
Logical.
Add to an existing plot? If |
show.plot |
Logical. Plot it? |
rug |
Logical.
If |
y |
Logical. If |
type |
Either |
xlab |
Caption for the x-axis. See
|
ylab |
Caption for the y-axis. See
|
pcex |
Character expansion of the points.
Here, for rank-1 models, points are the response y data.
For rank-2 models, points are the optimums.
See the |
pcol |
Color of the points.
See the |
pch |
Either an integer specifying a symbol or a single
character to be used as the default in plotting points.
See |
llty |
Line type.
Rank-1 models only.
See the |
lcol |
Line color.
Rank-1 models only.
See the |
llwd |
Line width.
Rank-1 models only.
See the |
label.arg |
Logical. Label the optimums and C? (applies to rank-2 models only). |
adj.arg |
Justification of text strings for labelling
the optimums
(applies to rank-2 models only).
See the |
ellipse |
Numerical, of length 0 or 1 (applies to rank-2 models only).
If |
Absolute |
Logical.
If |
elty |
Line type of the ellipses.
See the |
ecol |
Line color of the ellipses.
See the |
elwd |
Line width of the ellipses.
See the |
egrid |
Numerical. Line resolution of the ellipses. Choosing a larger value will result in smoother ellipses. Useful when ellipses are large. |
chull.arg |
Logical. Add a convex hull around the site scores? |
clty |
Line type of the convex hull.
See the |
ccol |
Line color of the convex hull.
See the |
clwd |
Line width of the convex hull.
See the |
cpch |
Character to be plotted at the intersection points of
the convex hull. Having white spaces means that site
labels are not obscured there.
See the |
C |
Logical. Add C (represented by arrows emanating
from |
OriginC |
Character or numeric.
Where the arrows representing C emanate from.
If character, it must be one of the choices given. By default the
first is chosen.
The value |
Clty |
Line type of the arrows representing C.
See the |
Ccol |
Line color of the arrows representing C.
See the |
Clwd |
Line width of the arrows representing C.
See the |
Ccex |
Numeric.
Character expansion of the labelling of C.
See the |
Cadj.arg |
Justification of text strings when labelling C.
See the |
stretchC |
Numerical. Stretching factor for C.
Instead of using C, |
sites |
Logical. Add the site scores (aka latent variable values, nu's) to the plot? (applies to rank-2 models only). |
spch |
Plotting character of the site scores.
The default value of |
scol |
Color of the site scores.
See the |
scex |
Character expansion of the site scores.
See the |
sfont |
Font used for the site scores.
See the |
check.ok |
Logical. Whether a check is performed to see
that |
jitter.y |
Logical. If |
... |
Arguments passed into the |
This function only works for rank-1 and rank-2 QRR-VGLMs with
argument noRRR = ~ 1
.
For unequal-tolerances models, the latent variable axes can
be rotated so that at least one of the tolerance matrices is
diagonal; see Coef.qrrvglm
for details.
Arguments beginning with “p” correspond to the points, e.g., pcex and pcol correspond to the size and color of the points. Such “p” arguments should be vectors of length 1, or n, the number of sites. For the rank-2 model, arguments beginning with “p” correspond to the optimums.
Returns a matrix of latent variables (site scores) regardless of whether a plot was produced or not.
Interpretation of a latent variable plot (CQO diagram) is potentially very misleading in terms of distances if (i) the tolerance matrices of the species are unequal and (ii) the contours of these tolerance matrices are not included in the ordination diagram.
A species which does not have an optimum will not have an ellipse drawn even if requested, i.e., if its tolerance matrix is not positive-definite.
Plotting C gives a visual display of the weights (loadings) of each of the variables used in the linear combination defining each latent variable.
The arguments elty
, ecol
and elwd
,
may be replaced in the future by llty
, lcol
and llwd
, respectively.
For rank-1 models, a similar function to this one is
perspqrrvglm
. It plots the fitted values on
a more fine grid rather than at the actual site scores here.
The result is a collection of smooth bell-shaped curves. However,
it has the weakness that the plot is more divorced from the data;
the user thinks it is the truth without an appreciation of the
statistical variability in the estimates.
In the example below, the data comes from an equal-tolerances model. The species' tolerance matrices are all the identity matrix, and the optimums are at (0,0), (1,1) and (-2,0) for species 1, 2, 3 respectively.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
lvplot
,
perspqrrvglm
,
Coef.qrrvglm
,
par
,
cqo
.
set.seed(123); nn <- 200
cdata <- data.frame(x2 = rnorm(nn),  # Mean 0 (needed when I.tol = TRUE)
                    x3 = rnorm(nn),  # Mean 0 (needed when I.tol = TRUE)
                    x4 = rnorm(nn))  # Mean 0 (needed when I.tol = TRUE)
cdata <- transform(cdata, latvar1 =  x2 + x3 - 2*x4,
                          latvar2 = -x2 + x3 + 0*x4)
# Nb. latvar2 is weakly correlated with latvar1
cdata <- transform(cdata,
           lambda1 = exp(6 - 0.5 * (latvar1-0)^2 - 0.5 * (latvar2-0)^2),
           lambda2 = exp(5 - 0.5 * (latvar1-1)^2 - 0.5 * (latvar2-1)^2),
           lambda3 = exp(5 - 0.5 * (latvar1+2)^2 - 0.5 * (latvar2-0)^2))
cdata <- transform(cdata, spp1 = rpois(nn, lambda1),
                          spp2 = rpois(nn, lambda2),
                          spp3 = rpois(nn, lambda3))
set.seed(111)
## Not run: 
p2 <- cqo(cbind(spp1, spp2, spp3) ~ x2 + x3 + x4, poissonff, data = cdata,
          Rank = 2, I.tolerances = TRUE,
          Crow1positive = c(TRUE, FALSE))  # deviance = 505.81
if (deviance(p2) > 506) stop("suboptimal fit obtained")
sort(deviance(p2, history = TRUE))  # A history of the iterations
Coef(p2)
## End(Not run)

## Not run: 
lvplot(p2, sites = TRUE, spch = "*", scol = "darkgreen", scex = 1.5,
       chull = TRUE, label = TRUE, Absolute = TRUE, ellipse = 140,
       adj = -0.5, pcol = "blue", pcex = 1.3, las = 1, Ccol = "orange",
       C = TRUE, Cadj = c(-0.3, -0.3, 1), Clwd = 2, Ccex = 1.4,
       main = paste("Contours at Abundance = 140 with",
                    "convex hull of the site scores"))
## End(Not run)

## Not run: 
var(latvar(p2))  # A diagonal matrix, i.e., uncorrelated latent vars
var(latvar(p2, varI.latvar = TRUE))  # Identity matrix
Tol(p2)[, , 1:2]  # Identity matrix
Tol(p2, varI.latvar = TRUE)[, , 1:2]  # A diagonal matrix
## End(Not run)
Produces an ordination diagram (also known as a biplot or latent variable plot) for reduced-rank vector generalized linear models (RR-VGLMs). For rank-2 models only, the x- and y-axis are the first and second canonical axes respectively.
lvplot.rrvglm(object, A = TRUE, C = TRUE, scores = FALSE,
    show.plot = TRUE, groups = rep(1, n),
    gapC = sqrt(sum(par()$cxy^2)), scaleA = 1,
    xlab = "Latent Variable 1", ylab = "Latent Variable 2",
    Alabels = if (length(object@misc$predictors.names))
              object@misc$predictors.names else param.names("LP", M),
    Aadj = par()$adj, Acex = par()$cex, Acol = par()$col,
    Apch = NULL,
    Clabels = rownames(Cmat), Cadj = par()$adj, Ccex = par()$cex,
    Ccol = par()$col, Clty = par()$lty, Clwd = par()$lwd,
    chull.arg = FALSE, ccex = par()$cex, ccol = par()$col,
    clty = par()$lty, clwd = par()$lwd,
    spch = NULL, scex = par()$cex, scol = par()$col,
    slabels = rownames(x2mat), ...)
object |
Object of class "rrvglm". |
A |
Logical. Allow the plotting of A? |
C |
Logical. Allow the plotting of C? If |
scores |
Logical. Allow the plotting of the |
show.plot |
Logical. Plot it? If |
groups |
A vector whose distinct values indicate
which group the observation belongs to. By default, all the
observations belong to a single group. Useful for the multinomial
logit model (see multinomial). |
gapC |
The gap between the end of the arrow and the text labelling of C, in latent variable units. |
scaleA |
Numerical value that is multiplied by A, so that C is divided by this value. |
xlab |
Caption for the x-axis. See
|
ylab |
Caption for the y-axis. See
|
Alabels |
Character vector to label A. Must be
of length |
Aadj |
Justification of text strings for
labelling A. See the |
Acex |
Numeric. Character expansion of the
labelling of A. See the |
Acol |
Colour of the labelling of A.
See the |
Apch |
Either an integer specifying a symbol or a single
character
to be used as the default in plotting points. See
|
Clabels |
Character vector to label C. Must be
of length |
Cadj |
Justification of text strings for
labelling C. See the |
Ccex |
Numeric. Character expansion of the
labelling of C. See the |
Ccol |
Line color of the arrows representing C.
See the |
Clty |
Line type of the arrows representing C.
See the |
Clwd |
Line width of the arrows representing C.
See the |
chull.arg |
Logical. Plot the convex hull of the scores?
This is done for each group (see the |
ccex |
Numeric.
Character expansion of the labelling of the convex hull.
See the |
ccol |
Line color of the convex hull. See the |
clty |
Line type of the convex hull. See the |
clwd |
Line width of the convex hull. See the |
spch |
Either an integer specifying a symbol or
a single character
to be used as the default in plotting points.
See |
scex |
Numeric. Character expansion of the
labelling of the scores.
See the |
scol |
Colour of the scores.
See the |
slabels |
Character vector to label the scores.
Must be of length |
... |
Arguments passed into the |
For RR-VGLMs, a biplot and a latent variable plot coincide. In general, many of the arguments starting with “A” refer to A (of length \(M\)), “C” to C (of length \(p_2\)), “c” to the convex hull (of length length(unique(groups))), and “s” to scores (of length \(n\)). As the result is a biplot, its interpretation is based on the inner product. The matrix of \(n\) scores (latent variable values) is returned regardless of whether a plot was produced or not.
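Below is a minimal sketch of that inner-product reading (not from this help page; it assumes the A and C slots of the object returned by Coef applied to an "rrvglm" fit, and the small rank-1 fit is purely for demonstration):

set.seed(2)
pneumo2 <- transform(pneumo, slet = c(scale(log(exposure.time))),
                     x1 = rnorm(nrow(pneumo)))  # Unrelated covariate
fit0 <- rrvglm(cbind(normal, mild, severe) ~ slet + x1,
               multinomial, data = pneumo2, Rank = 1)
cf <- Coef(fit0)
cf@A %*% t(cf@C)  # The inner product A C^T that a biplot displays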
The functions lvplot.rrvglm
and
biplot.rrvglm
are equivalent.
In the example below the predictor variables are centered, which is a good idea.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
lvplot, par, rrvglm, Coef.rrvglm, rrvglm.control.
set.seed(1)
nn <- nrow(pneumo)  # x1--x3 are some unrelated covariates
pneumo <- transform(pneumo, slet = scale(log(exposure.time)),
                    imag = severe + 3,  # Fictitious!
                    x1 = rnorm(nn), x2 = rnorm(nn), x3 = rnorm(nn))
fit <- rrvglm(cbind(normal, mild, severe, imag) ~ slet + x1 + x2 + x3,
              # Corner = FALSE, Uncorrel = TRUE,  # orig.
              multinomial, data = pneumo, Rank = 2)
## Not run: 
lvplot(fit, chull = TRUE, scores = TRUE, clty = 2, ccol = 4,
       scol = "red", Ccol = "green3", Clwd = 2, Ccex = 2,
       main = "Biplot of some fictitious data")
## End(Not run)
A small count data set on accidents involving 414 machinists, collected in a three-month study around the end of World War I.
data(machinists)
A data frame with the following variables.
accidents
The number of accidents.
ofreq
Observed frequency, i.e., the number of machinists with that many accidents.
The data were collected over a period of three months; there were 414 machinists in total. Data were also collected over six months, but they are not given here.
Incidence of Industrial Accidents. Report No. 4 (Industrial Fatigue Research Board), Stationery Office, London, 1919.
Greenwood, M. and Yule, G. U. (1920). An Inquiry into the Nature of Frequency Distributions Representative of Multiple Happenings with Particular Reference to the Occurrence of Multiple Attacks of Disease or of Repeated Accidents. Journal of the Royal Statistical Society, 83, 255–279.
machinists
mean(with(machinists, rep(accidents, times = ofreq)))
var(with(machinists, rep(accidents, times = ofreq)))
## Not run: 
barplot(with(machinists, ofreq),
        names.arg = as.character(with(machinists, accidents)),
        main = "Machinists accidents", col = "lightblue", las = 1,
        ylab = "Frequency", xlab = "accidents")
## End(Not run)
Maximum likelihood estimation of the 3-parameter Makeham distribution.
makeham(lscale = "loglink", lshape = "loglink", lepsilon = "loglink", iscale = NULL, ishape = NULL, iepsilon = NULL, gscale = exp(-5:5),gshape = exp(-5:5), gepsilon = exp(-4:1), nsimEIM = 500, oim.mean = TRUE, zero = NULL, nowarning = FALSE)
makeham(lscale = "loglink", lshape = "loglink", lepsilon = "loglink", iscale = NULL, ishape = NULL, iepsilon = NULL, gscale = exp(-5:5),gshape = exp(-5:5), gepsilon = exp(-4:1), nsimEIM = 500, oim.mean = TRUE, zero = NULL, nowarning = FALSE)
nowarning |
Logical. Suppress a warning? Ignored for VGAM 0.9-7 and higher. |
lshape , lscale , lepsilon
|
Parameter link functions applied to the
shape parameter |
ishape , iscale , iepsilon
|
Optional initial values.
A |
gshape , gscale , gepsilon
|
|
nsimEIM , zero
|
See |
oim.mean |
To be currently ignored. |
The Makeham distribution, which adds another parameter to the Gompertz distribution, has cumulative distribution function

\[ F(y; \alpha, \beta, \varepsilon) = 1 - \exp\left\{ -y \varepsilon + \frac{\alpha}{\beta} \left[ 1 - e^{\beta y} \right] \right\} \]

which leads to a probability density function

\[ f(y; \alpha, \beta, \varepsilon) = \left( \varepsilon + \alpha e^{\beta y} \right) \exp\left\{ -y \varepsilon + \frac{\alpha}{\beta} \left[ 1 - e^{\beta y} \right] \right\}, \]

for \(y > 0\), \(\alpha > 0\), \(\beta > 0\) and \(\varepsilon \geq 0\). Here, \(\beta\) is called the scale parameter scale, and \(\alpha\) is called a shape parameter. The moments for this distribution do not appear to be available in closed form.
Simulated Fisher scoring is used and multiple responses are handled.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
A lot of care is needed because
this is a rather difficult distribution for parameter estimation,
especially when the shape parameter is large relative to the
scale parameter.
If the self-starting initial values fail then try experimenting
with the initial value arguments, especially iepsilon
.
Successful convergence depends on having very good initial values.
More improvements could be made here.
Also, monitor convergence by setting trace = TRUE
.
A trick is to fit a gompertz
distribution and use
it for initial values; see below.
However, this family function is currently numerically fraught.
T. W. Yee
dmakeham, gompertz, simulate.vlm.
## Not run: 
set.seed(123)
mdata <- data.frame(x2 = runif(nn <- 1000))
mdata <- transform(mdata, eta1 = -1, ceta1 = 1, eeta1 = -2)
mdata <- transform(mdata, shape1 = exp(eta1), scale1 = exp(ceta1),
                   epsil1 = exp(eeta1))
mdata <- transform(mdata,
  y1 = rmakeham(nn, shape = shape1, scale = scale1, eps = epsil1))

# A trick is to fit a Gompertz distribution first
fit0 <- vglm(y1 ~ 1, gompertz, data = mdata, trace = TRUE)
fit1 <- vglm(y1 ~ 1, makeham, data = mdata,
             etastart = cbind(predict(fit0), log(0.1)), trace = TRUE)

coef(fit1, matrix = TRUE)
summary(fit1)
## End(Not run)
Density, cumulative distribution function, quantile function and random generation for the Makeham distribution.
dmakeham(x, scale = 1, shape, epsilon = 0, log = FALSE)
pmakeham(q, scale = 1, shape, epsilon = 0, lower.tail = TRUE,
         log.p = FALSE)
qmakeham(p, scale = 1, shape, epsilon = 0, lower.tail = TRUE,
         log.p = FALSE)
rmakeham(n, scale = 1, shape, epsilon = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
log |
Logical.
If |
lower.tail , log.p
|
|
scale , shape
|
positive scale and shape parameters. |
epsilon |
another parameter. Must be non-negative. See below. |
See makeham for details. The default value of epsilon = 0 corresponds to the Gompertz distribution. The function pmakeham uses lambertW.
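As a quick sketch (an assumption: VGAM's pgompertz is available with scale and shape arguments), setting epsilon = 0 should reproduce the Gompertz distribution function:

qq <- seq(0.1, 3, by = 0.1)
# With epsilon = 0 the Makeham CDF collapses to the Gompertz CDF:
max(abs(pmakeham(qq, scale = 1, shape = 0.5, epsilon = 0) -
        pgompertz(qq, scale = 1, shape = 0.5)))  # Should be ~0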
dmakeham
gives the density,
pmakeham
gives the cumulative distribution function,
qmakeham
gives the quantile function, and
rmakeham
generates random deviates.
T. W. Yee and Kai Huang
Jodra, P. (2009). A closed-form expression for the quantile function of the Gompertz-Makeham distribution. Mathematics and Computers in Simulation, 79, 3069–3075.
probs <- seq(0.01, 0.99, by = 0.01)
Shape <- exp(-1); Scale <- exp(1); Epsilon <- exp(-1)
max(abs(pmakeham(qmakeham(probs, sca = Scale, Shape, eps = Epsilon),
                 sca = Scale, Shape, eps = Epsilon) - probs))  # Should be 0

## Not run: 
x <- seq(-0.1, 2.0, by = 0.01)
plot(x, dmakeham(x, sca = Scale, Shape, eps = Epsilon), type = "l",
     main = "Blue is density, orange is the CDF",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     col = "blue", las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(x, pmakeham(x, sca = Scale, Shape, eps = Epsilon), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qmakeham(probs, sca = Scale, Shape, eps = Epsilon)
lines(Q, dmakeham(Q, sca = Scale, Shape, eps = Epsilon),
      col = "purple", lty = 3, type = "h")
pmakeham(Q, sca = Scale, Shape, eps = Epsilon) - probs  # Should be all 0
abline(h = probs, col = "purple", lty = 3)
## End(Not run)
Marginal effects for the multinomial logit model, cumulative logit/probit/... models, continuation ratio models, stopping ratio models, and adjacent categories models: the derivative of the fitted probabilities with respect to each explanatory variable.
margeff(object, subset = NULL, ...)
object |
A |
subset |
Numerical or logical vector, denoting the required observation(s). Recycling is used if possible. The default means all observations. |
... |
further arguments passed into the other methods functions. |
Computes the derivative of the fitted probabilities of the categorical response model with respect to each explanatory variable. Formerly one big function, this function now uses S4 dispatch to break up the computations.
The function margeff()
is not generic. However, it
calls the function margeffS4VGAM()
which is.
This is based on the class of the VGAMff
argument, and
it uses the S4 function setMethod
to
correctly dispatch to the required methods function.
The inheritance is given by the vfamily
slot of the
VGAM family function.
A \(p\) by \(M+1\) by \(n\) array, where \(p\) is the number of explanatory variables and the (hopefully) nominal response has \(M+1\) levels, and there are \(n\) observations. In general, if is.numeric(subset) and length(subset) == 1 then a \(p\) by \(M+1\) matrix is returned.
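A minimal sketch of these dimensions (the exact dimnames are an assumption; pneumo is shipped with VGAM):

pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo)
dd <- margeff(fit)
dim(dd)            # p x (M+1) x n
dimnames(dd)[[1]]  # The regressors indexing the first dimension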
Care is needed in interpretation, e.g., the change is not universally accurate for a unit change in each explanatory variable because eventually the ‘new’ probabilities may become negative or greater than unity. Also, the ‘new’ probabilities will not sum to one.
This function is not applicable for models with
data-dependent terms such as bs
and
poly
.
Also the function should not be applied to models with any
terms that
have generated more than one column of the LM model matrix,
such as bs
and poly
.
For such try using numerical methods such as finite-differences.
The formula
in object
should comprise simple terms
of the form ~ x2 + x3 + x4
, etc.
Some numerical problems may occur if the fitted values are
close to 0 or 1 for the
cratio
and
sratio
models.
Models with offsets may result in an incorrect answer.
For multinomial
this function should handle any value of refLevel
and also
any constraint matrices.
However, it does not currently handle
the xij
or form2
arguments,
nor vgam
objects.
If marginal effects are to be computed for some values not
equal to those used in the training set, then
the @x
and the @predictors
slots both need to be
assigned. See Example 3 below.
Some other limitations are imposed, e.g.,
for acat
models
only a loglink
link is allowed.
T. W. Yee, with some help and motivation from Stasha Rmandic.
multinomial, cumulative, propodds, acat, cratio, sratio, poissonff, negbinomial, vglm.
# Not a good example for multinomial() since the response is ordinal!!
ii <- 3; hh <- 1/100
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo)
fit <- vglm(cbind(normal, mild, severe) ~ let,
            cumulative(reverse = TRUE, parallel = TRUE), data = pneumo)
fitted(fit)[ii, ]

mynewdata <- with(pneumo, data.frame(let = let[ii] + hh))
(newp <- predict(fit, newdata = mynewdata, type = "response"))

# Compare the difference. Should be the same as hh --> 0.
round((newp - fitted(fit)[ii, ]) / hh, 3)  # Finite-diff approxn
round(margeff(fit, subset = ii)["let", ], 3)

# Other examples
round(margeff(fit), 3)
round(margeff(fit, subset = 2)["let", ], 3)
round(margeff(fit, subset = c(FALSE, TRUE))["let", , ], 3)  # Recycling
round(margeff(fit, subset = c(2, 4, 6, 8))["let", , ], 3)

# Example 3; margeffs at a new value
mynewdata2a <- data.frame(let = 2)       # New value
mynewdata2b <- data.frame(let = 2 + hh)  # For finite-diff approxn
(neweta2 <- predict(fit, newdata = mynewdata2a))
fit@x[1, ] <- c(1, unlist(mynewdata2a))
fit@predictors[1, ] <- neweta2  # Needed
max(abs(margeff(fit, subset = 1)["let", ] -
    (predict(fit, newdata = mynewdata2b, type = "response") -
     predict(fit, newdata = mynewdata2a, type = "response")) / hh
))  # Should be 0
Some marital data mainly from a large NZ company collected in the early 1990s.
data(marital.nz)
A data frame with 6053 observations on the following 3 variables.
age
a numeric vector, age in years
ethnicity
a factor with levels European, Maori, Other, Polynesian. Only Europeans are included in the data set.
mstatus
a factor with levels Divorced/Separated, Married/Partnered, Single, Widowed.
This is a subset of a data set collected from a self-administered questionnaire in a large New Zealand workforce observational study conducted during 1992–3. The data were augmented by a second study consisting of retirees. The data can be considered a reasonable representation of the white male New Zealand population in the early 1990s.
Clinical Trials Research Unit, University of Auckland, New Zealand.
summary(marital.nz)
Generic function for the maximums (maxima) of a model.
Max(object, ...)
object |
An object for which the computation or extraction of a maximum (or maximums) is meaningful. |
... |
Other arguments fed into the specific
methods function of the model. Sometimes they are fed
into the methods function for |
Different models can define a maximum in different ways. Many models have no such notion or definition.
Maximums occur in quadratic and additive ordination, e.g., CQO or CAO. For these models the maximum is the fitted value at the optimum. For quadratic ordination models there is a formula for the optimum but for additive ordination models the optimum must be searched for numerically. If it occurs on the boundary, then the optimum is undefined. For a valid optimum, the fitted value at the optimum is the maximum.
The value returned depends specifically on the methods function invoked.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
## Not run: 
set.seed(111)  # This leads to the global solution
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig +
          CoveMoss + CoveHerb + ReflLux,
          poissonff, Bestof = 2, data = hspider, Crow1positive = FALSE)

Max(p1)

index <- 1:ncol(depvar(p1))
persp(p1, col = index, las = 1, llwd = 2)
abline(h = Max(p1), lty = 2, col = index)
## End(Not run)
Estimating the parameter of the Maxwell distribution by maximum likelihood estimation.
maxwell(link = "loglink", zero = NULL, parallel = FALSE, type.fitted = c("mean", "percentiles", "Qlink"), percentiles = 50)
maxwell(link = "loglink", zero = NULL, parallel = FALSE, type.fitted = c("mean", "percentiles", "Qlink"), percentiles = 50)
link |
Parameter link function applied to |
zero , parallel
|
|
type.fitted , percentiles
|
See |
The Maxwell distribution, which is used in the area of thermodynamics, has a probability density function that can be written

\[ f(y; a) = \sqrt{2/\pi}\, a^{3/2} y^2 \exp(-a y^2 / 2) \]

for \(y > 0\) and \(a > 0\). The mean of \(Y\) is \(2 \sqrt{2/(a \pi)}\) (returned as the fitted values), and its variance is \((3\pi - 8)/(\pi a)\).
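A small simulation sketch of these two formulas (it uses rmaxwell; see Maxwell):

set.seed(1)
aa <- exp(2)  # The rate parameter a
yy <- rmaxwell(10000, rate = aa)
c(sample = mean(yy), theory = 2 * sqrt(2 / (aa * pi)))
c(sample = var(yy),  theory = (3 * pi - 8) / (pi * aa))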
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
Fisher-scoring and Newton-Raphson are the same here.
A related distribution is the Rayleigh distribution.
This VGAM family function handles multiple responses.
This VGAM family function can be mimicked by
poisson.points(ostatistic = 1.5, dimension = 2)
.
T. W. Yee
von Seggern, D. H. (1993). CRC Standard Curves and Surfaces, Boca Raton, FL, USA: CRC Press.
Maxwell, rayleigh, poisson.points.
mdata <- data.frame(y = rmaxwell(1000, rate = exp(2)))
fit <- vglm(y ~ 1, maxwell, mdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
Density, distribution function, quantile function and random generation for the Maxwell distribution.
dmaxwell(x, rate, log = FALSE)
pmaxwell(q, rate, lower.tail = TRUE, log.p = FALSE)
qmaxwell(p, rate, lower.tail = TRUE, log.p = FALSE)
rmaxwell(n, rate)
x , q , p , n
|
Same as |
rate |
the (rate) parameter. |
log |
Logical.
If |
lower.tail , log.p
|
See maxwell
, the VGAM family function for
estimating the (rate) parameter by maximum likelihood
estimation, for the formula of the probability density function.
dmaxwell
gives the density,
pmaxwell
gives the distribution function,
qmaxwell
gives the quantile function, and
rmaxwell
generates random deviates.
The Maxwell distribution is related to the Rayleigh distribution.
T. W. Yee and Kai Huang
Balakrishnan, N. and Nevzorov, V. B. (2003). A Primer on Statistical Distributions. Hoboken, New Jersey: Wiley.
## Not run: 
rate <- 3; x <- seq(-0.5, 3, length = 100)
plot(x, dmaxwell(x, rate = rate), type = "l", col = "blue",
     main = "Blue is density, orange is CDF", ylab = "", las = 1,
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0, col = "blue", lty = 2)
lines(x, pmaxwell(x, rate = rate), type = "l", col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qmaxwell(probs, rate = rate)
lines(Q, dmaxwell(Q, rate), col = "purple", lty = 3, type = "h")
lines(Q, pmaxwell(Q, rate), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pmaxwell(Q, rate) - probs))  # Should be zero
## End(Not run)
Estimates the two parameters of the McCullagh (1989) distribution by maximum likelihood estimation.
mccullagh89(ltheta = "rhobitlink", lnu = logofflink(offset = 0.5),
            itheta = NULL, inu = NULL, zero = NULL)
ltheta , lnu
|
Link functions
for the |
itheta , inu
|
Numeric.
Optional initial values for |
zero |
See |
The McCullagh (1989) distribution has density function

\[ f(y; \theta, \nu) = \frac{(1 - y^2)^{\nu - \frac12}}{(1 - 2\theta y + \theta^2)^{\nu}\; \mathrm{Beta}(\nu + \frac12, \frac12)} \]

where \(-1 < y < 1\) and \(-1 < \theta < 1\). This distribution is equation (1) in that paper. The parameter \(\nu\) satisfies \(\nu > -1/2\), therefore the default is to use a log-offset link with offset equal to 0.5, i.e., \(\eta_2 = \log(\nu + 0.5)\). The mean of \(Y\) is \(\nu \theta / (1 + \nu)\), and this is returned as the fitted values.
This distribution is related to the Leipnik distribution (see Johnson
et al. (1995)), is related to ultraspherical functions, and under
certain conditions, arises as exit distributions for Brownian motion.
Fisher scoring is implemented here and it uses a diagonal matrix so
the parameters are globally orthogonal in the Fisher information sense.
McCullagh (1989) also states that, to some extent, \(\theta\) and \(\nu\) have the properties of a location parameter and a precision parameter, respectively.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
Convergence may be slow or fail unless the initial values are reasonably close. If a failure occurs, try assigning the argument inu and/or itheta. Figure 1 of McCullagh (1989) gives a broad range of densities for different values of \(\theta\) and \(\nu\), and this could be consulted for obtaining reasonable initial values if all else fails.
T. W. Yee
McCullagh, P. (1989). Some statistical properties of a family of continuous univariate distributions. Journal of the American Statistical Association, 84, 125–129.
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions, 2nd edition, Volume 2, New York: Wiley. (pages 612–617).
leipnik, rhobitlink, logofflink.
# Limit as theta = 0, nu = Inf:
mdata <- data.frame(y = rnorm(1000, sd = 0.2))
fit <- vglm(y ~ 1, mccullagh89, data = mdata, trace = TRUE)
head(fitted(fit))
with(mdata, mean(y))
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)
Returns the mean of a 1- or 2-parameter GAITD combo probability mass function.
meangaitd(theta.p, fam = c("pois", "log", "zeta"),
          a.mix = NULL, i.mix = NULL, d.mix = NULL,
          a.mlm = NULL, i.mlm = NULL, d.mlm = NULL,
          truncate = NULL, max.support = Inf,
          pobs.mix = 0, pobs.mlm = 0,
          pstr.mix = 0, pstr.mlm = 0,
          pdip.mix = 0, pdip.mlm = 0, byrow.aid = FALSE,
          theta.a = theta.p, theta.i = theta.p, theta.d = theta.p, ...)
theta.p |
Same as |
fam |
Same as |
a.mix , i.mix , a.mlm , i.mlm
|
Same as |
d.mix , d.mlm
|
Same as |
truncate , max.support
|
Same as |
pobs.mix , pobs.mlm , byrow.aid
|
Same as |
pstr.mix , pstr.mlm , pdip.mix , pdip.mlm
|
Same as |
theta.a , theta.i , theta.d
|
Same as |
... |
Currently unused. |
This function returns the mean of the PMF of
the GAITD combo model.
Many of its arguments are the same as dgaitdplot
.
More functionality may be added in the future, such as
returning the variance.
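As a sketch of what is being computed (an assumption here is that dgaitdpois accepts the same special-value arguments; see Gaitdpois), the mean can be checked against a direct weighted sum of the PMF over the truncated support:

i.mix <- seq(0, 15, by = 5)
yy <- 0:17  # The full support, given max.support = 17
sum(yy * dgaitdpois(yy, 10, a.mix = i.mix + 1, i.mix = i.mix,
                    max.support = 17, pobs.mix = 0.1, pstr.mix = 0.1))
# Should agree with:
meangaitd(10, a.mix = i.mix + 1, i.mix = i.mix,
          max.support = 17, pobs.mix = 0.1, pstr.mix = 0.1)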
The mean.
This utility function may change a lot in the future.
T. W. Yee.
dgaitdplot, Gaitdpois, gaitdpoisson.
i.mix <- seq(0, 15, by = 5)
lambda.p <- 10
meangaitd(lambda.p, a.mix = i.mix + 1, i.mix = i.mix,
          max.support = 17, pobs.mix = 0.1, pstr.mix = 0.1)
Melbourne daily maximum temperatures in degrees Celsius over the ten-year period 1981–1990.
data(melbmaxtemp)
A vector with 3650 observations.
This is a time series from Melbourne, Australia. It is commonly used to pose a difficult quantile regression problem since the data are bimodal. That is, a hot day is likely to be followed by either an equally hot day or one much cooler. However, an independence assumption is typically made.
Hyndman, R. J. and Bashtannyk, D. M. and Grunwald, G. K. (1996). Estimating and visualizing conditional densities. J. Comput. Graph. Statist., 5(4), 315–336.
summary(melbmaxtemp)
## Not run: 
melb <- data.frame(today = melbmaxtemp[-1],
                   yesterday = melbmaxtemp[-length(melbmaxtemp)])
plot(today ~ yesterday, data = melb,
     xlab = "Yesterday's Max Temperature",
     ylab = "Today's Max Temperature", cex = 1.4, type = "n")
points(today ~ yesterday, melb, pch = 0, cex = 0.50, col = "blue")
abline(a = 0, b = 1, lty = 3)
## End(Not run)
Mean excess plot (also known as a mean residual life plot), a diagnostic plot for the generalized Pareto distribution (GPD).
meplot(object, ...)
meplot.default(y, main = "Mean Excess Plot",
    xlab = "Threshold", ylab = "Mean Excess", lty = c(2, 1:2),
    conf = 0.95, col = c("blue", "black", "blue"), type = "l", ...)
meplot.vlm(object, ...)
y |
A numerical vector. |
main , xlab , ylab
|
Character. Overall title for the plot, and titles for the x- and y-axes. |
lty |
Line type. The second value is for the mean excess value, the first and third values are for the envelope surrounding the confidence interval. |
conf |
Confidence level. The default results in approximate 95 percent confidence intervals for each mean excess value. |
col |
Colour of the three lines. |
type |
Type of plot. The default means lines are joined between the mean excesses and also the upper and lower limits of the confidence intervals. |
object |
An object that inherits class |
... |
Graphical argument passed into
|
If \(Y\) has a GPD with scale parameter \(\sigma\) and shape parameter \(\xi < 1\), and if \(y > 0\), then

\[ E(Y - u \mid Y > u) = \frac{\sigma + \xi u}{1 - \xi}. \]

It is a linear function in \(u\), the threshold. Note that \(Y - u\) is called the excess and values of \(Y\) greater than \(u\) are called exceedances. The empirical version used by these functions is to use sample means to estimate the left-hand side of the equation. Values of \(u\) in the plot are the values of \(y\) itself.
If the plot is roughly a straight line then the GPD is a good
fit; this plot can be used to select an appropriate threshold
value. See
gpd
for more details.
If the plot is flat then the data may be exponential,
and if it is curved then it may be Weibull or gamma.
There is often a lot of variance/fluctuation at the RHS of the
plot due to fewer observations.
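A hand-rolled sketch of a single empirical mean excess value (plain R, matching the definition above by a sample mean):

set.seed(1)
yy <- rexp(1000)  # Exponential data give a roughly flat plot
uu <- 1           # A chosen threshold
mean(yy[yy > uu] - uu)  # Empirical mean excess; about 1 for rexp()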
The function meplot
is generic, and
meplot.default
and meplot.vlm
are some
methods functions for mean excess plots.
A list is returned invisibly with the following components.
threshold |
The x axis values. |
meanExcess |
The y axis values.
Each value is a sample mean minus a value u. |
plusminus |
The amount which is added or subtracted
from the mean excess to give the confidence interval.
The last value is a |
The function is designed for speed and not accuracy, therefore
huge data sets with extremely large values may cause failure
(the function cumsum is used). Ties may not be well handled.
T. W. Yee
Davison, A. C. and Smith, R. L. (1990). Models for exceedances over high thresholds (with discussion). Journal of the Royal Statistical Society, Series B, Methodological, 52, 393–442.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
gpd.
## Not run: 
meplot(with(venice90, sealevel), las = 1) -> ii
names(ii)
abline(h = ii$meanExcess[1], col = "orange", lty = "dashed")

par(mfrow = c(2, 2))
for (ii in 1:4)
  meplot(rgpd(1000), col = c("orange", "blue", "orange"))
## End(Not run)
Fits a Michaelis-Menten nonlinear regression model.
micmen(rpar = 0.001, divisor = 10, init1 = NULL, init2 = NULL,
       imethod = 1, oim = TRUE, link1 = "identitylink",
       link2 = "identitylink", firstDeriv = c("nsimEIM", "rpar"),
       probs.x = c(0.15, 0.85), nsimEIM = 500, dispersion = 0,
       zero = NULL)
rpar |
Numeric. Initial positive ridge parameter. This is used to create positive-definite weight matrices. |
divisor |
Numerical. The divisor used to divide the
ridge parameter at each
iteration until it is very small but still positive.
The value of
|
init1 , init2
|
Numerical. Optional initial value for the first and second parameters, respectively. The default is to use a self-starting value. |
link1 , link2
|
Parameter link function applied to the first and second
parameters, respectively.
See |
dispersion |
Numerical. Dispersion parameter. |
firstDeriv |
Character. Algorithm for computing the first derivatives and working weights. The first is the default. |
imethod , probs.x
|
See |
nsimEIM , zero
|
See |
oim |
Use the OIM?
See |
The Michaelis-Menten model is given by

\[ E(Y_i) = \frac{\theta_1 u_i}{\theta_2 + u_i} \]

where \(\theta_1\) and \(\theta_2\) are the two parameters.
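A minimal sketch of this mean function (the names mm.mean, theta1 and theta2 are illustrative only, not part of the package):

mm.mean <- function(u, theta1, theta2) theta1 * u / (theta2 + u)
## Not run: 
curve(mm.mean(x, theta1 = 200, theta2 = 0.1), 0, 2, las = 1,
      xlab = "u", ylab = "E(Y)")  # Saturates towards theta1 = 200
## End(Not run)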
The relationship between iteratively reweighted least squares and the Gauss-Newton algorithm is given in Wedderburn (1974). However, the algorithm used by this family function is different. Details are given at the Author's web site.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This function is not (nor could ever be) entirely reliable. Plotting the fitted function and monitoring convergence is recommended.
The regressor values \(u_i\) are inputted as the RHS of the form2 argument. It should be just a simple term; no smart prediction is used. It should be just a single vector, therefore omit the intercept term. The LHS of the formula form2 is ignored. To predict the response at new values of \(u_i\) one must assign these values to the @extra$Xm2 slot in the fitted object, e.g., see the example below.
Numerical problems may occur. If so, try setting some initial values for the parameters. In the future, several self-starting initial values will be implemented.
T. W. Yee
Seber, G. A. F. and Wild, C. J. (1989). Nonlinear Regression, New York: Wiley.
Wedderburn, R. W. M. (1974). Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika, 61, 439–447.
Bates, D. M. and Watts, D. G. (1988). Nonlinear Regression Analysis and Its Applications, New York: Wiley.
mfit <- vglm(velocity ~ 1, micmen, data = enzyme, trace = TRUE,
             crit = "coef", form2 = ~ conc - 1)
summary(mfit)

## Not run: 
plot(velocity ~ conc, enzyme, xlab = "concentration", las = 1,
     col = "blue",
     main = "Michaelis-Menten equation for the enzyme data",
     ylim = c(0, max(velocity)), xlim = c(0, max(conc)))
points(fitted(mfit) ~ conc, enzyme, col = 2, pch = "+", cex = 2)

# This predicts the response at a finer grid:
newenzyme <- data.frame(conc = seq(0, max(with(enzyme, conc)),
                                   len = 200))
mfit@extra$Xm2 <- newenzyme$conc  # This is needed for prediction
lines(predict(mfit, newenzyme, "response") ~ conc, newenzyme,
      col = "red")
## End(Not run)
Computes the Mills ratio.
mills.ratio(x)
mills.ratio2(x)
x |
Numeric (real). |
The Mills ratio here is dnorm(x) / pnorm(x)
(some use (1 - pnorm(x)) / dnorm(x)
).
Some care is needed as x approaches -Inf; when \(x\) is very negative its value approaches \(-x\).
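A small numerical sketch of both points (plain R):

xx <- c(-1, -5, -10)
# Columns 1 and 2 agree; column 3 shows the -x asymptote:
cbind(mills.ratio(xx), dnorm(xx) / pnorm(xx), -xx)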
mills.ratio
returns the Mills ratio, and
mills.ratio2
returns dnorm(x) * dnorm(x) / pnorm(x)
.
T. W. Yee
Mills, J. P. (1926). Table of the ratio: area to bounding ordinate, for any portion of normal curve. Biometrika. 18(3/4), 395–400.
## Not run: 
curve(mills.ratio, -5, 5, col = "orange", las = 1)
curve(mills.ratio, -5, 5, col = "orange", las = 1, log = "y")
## End(Not run)
Estimates the three parameters of a mixture of two exponential distributions by maximum likelihood estimation.
mix2exp(lphi = "logitlink", llambda = "loglink", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.8, 0.2), nsimEIM = 100, zero = "phi")
mix2exp(lphi = "logitlink", llambda = "loglink", iphi = 0.5, il1 = NULL, il2 = NULL, qmu = c(0.8, 0.2), nsimEIM = 100, zero = "phi")
lphi , llambda
|
Link functions for the parameters |
iphi , il1 , il2
|
Initial value for |
qmu |
Vector with two values giving the probabilities relating to the
sample quantiles for obtaining initial values for
|
nsimEIM , zero
|
The probability density function can be loosely written as

\[ f(y) = \phi\,\mathrm{Exponential}(\lambda_1) + (1 - \phi)\,\mathrm{Exponential}(\lambda_2) \]

where \(\phi\) is the probability an observation belongs to the first group, and \(y > 0\). The parameter \(\phi\) satisfies \(0 < \phi < 1\). The mean of \(Y\) is \(\phi / \lambda_1 + (1 - \phi) / \lambda_2\) and this is returned as the fitted values. By default, the three linear/additive predictors are \((\mathrm{logit}(\phi), \log(\lambda_1), \log(\lambda_2))^T\).
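A small simulation sketch of the stated mean (plain R, independent of the family function):

set.seed(2)
phi <- 0.3; lambda1 <- 2; lambda2 <- 0.5
yy <- ifelse(runif(1e5) < phi, rexp(1e5, lambda1), rexp(1e5, lambda2))
c(sample = mean(yy),
  theory = phi / lambda1 + (1 - phi) / lambda2)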
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
This VGAM family function requires care for a successful
application.
In particular, good initial values are required because
of the presence of local solutions. Therefore running
this function with several different combinations of
arguments such as iphi
, il1
, il2
,
qmu
is highly recommended. Graphical methods such
as hist
can be used as an aid.
This VGAM family function is experimental and should be used with care.
Fitting this model successfully to data can be
difficult due to local solutions, uniqueness problems
and ill-conditioned data. It pays to fit the model
several times with different initial values and check
that the best fit looks reasonable. Plotting the
results is recommended. This function works better as \(\lambda_1\) and \(\lambda_2\) become more different. The default control argument trace = TRUE is to encourage monitoring convergence.
T. W. Yee
rexp, exponential, mix2poisson.
## Not run: 
lambda1 <- exp(1); lambda2 <- exp(3)
(phi <- logitlink(-1, inverse = TRUE))
mdata <- data.frame(y1 = rexp(nn <- 1000, lambda1))
mdata <- transform(mdata, y2 = rexp(nn, lambda2))
mdata <- transform(mdata, Y = ifelse(runif(nn) < phi, y1, y2))
fit <- vglm(Y ~ 1, mix2exp, data = mdata, trace = TRUE)
coef(fit, matrix = TRUE)

# Compare the results with the truth
round(rbind('Estimated' = Coef(fit),
            'Truth' = c(phi, lambda1, lambda2)), digits = 2)

with(mdata, hist(Y, prob = TRUE,
                 main = "Orange=estimate, blue=truth"))
abline(v = 1 / Coef(fit)[c(2, 3)], lty = 2, col = "orange", lwd = 2)
abline(v = 1 / c(lambda1, lambda2), lty = 2, col = "blue", lwd = 2)
## End(Not run)
Estimates the five parameters of a mixture of two univariate normal distributions by maximum likelihood estimation.
mix2normal(lphi = "logitlink", lmu = "identitylink",
           lsd = "loglink", iphi = 0.5, imu1 = NULL, imu2 = NULL,
           isd1 = NULL, isd2 = NULL, qmu = c(0.2, 0.8),
           eq.sd = TRUE, nsimEIM = 100, zero = "phi")
lphi , lmu , lsd
|
Link functions for the parameters |
iphi |
Initial value for |
imu1 , imu2
|
Optional initial value for |
isd1 , isd2
|
Optional initial value for |
qmu |
Vector with two values giving the probabilities relating
to the sample quantiles for obtaining initial values for
|
eq.sd |
Logical indicating whether the two standard deviations should
be constrained to be equal. If |
nsimEIM |
|
zero |
May be an integer vector
specifying which linear/additive predictors are modelled as
intercept-only. If given, the value or values can be from the
set |
The probability density function can be loosely written as

\[ f(y) = \phi\, N(\mu_1, \sigma_1) + (1 - \phi)\, N(\mu_2, \sigma_2) \]

where \(\phi\) is the probability an observation belongs to the first group. The parameters \(\mu_1\) and \(\mu_2\) are the means, and \(\sigma_1\) and \(\sigma_2\) are the standard deviations. The parameter \(\phi\) satisfies \(0 < \phi < 1\). The mean of \(Y\) is \(\phi \mu_1 + (1 - \phi) \mu_2\) and this is returned as the fitted values. By default, the five linear/additive predictors are \((\mathrm{logit}(\phi), \mu_1, \log(\sigma_1), \mu_2, \log(\sigma_2))^T\). If eq.sd = TRUE then \(\sigma_1 = \sigma_2\) is enforced.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Numerical problems can occur and
half-stepping is not uncommon.
If failure to converge occurs, try inputting better initial
values,
e.g., by using iphi
,
qmu
,
imu1
,
imu2
,
isd1
,
isd2
,
etc.
This VGAM family function is experimental and should be used with care.
Fitting this model successfully to data can be difficult due
to numerical problems and ill-conditioned data. It pays to
fit the model several times with different initial values and
check that the best fit looks reasonable. Plotting the results
is recommended. This function works better as \(\mu_1\) and \(\mu_2\) become more different.
Convergence can be slow, especially when the two component
distributions are not well separated.
The default control argument trace = TRUE
is to encourage
monitoring convergence.
Having eq.sd = TRUE
often makes the overall optimization
problem easier.
T. W. Yee
McLachlan, G. J. and Peel, D. (2000). Finite Mixture Models. New York: Wiley.
Everitt, B. S. and Hand, D. J. (1981). Finite Mixture Distributions. London: Chapman & Hall.
uninormal, Normal, mix2poisson.
## Not run: 
mu1 <- 99; mu2 <- 150; nn <- 1000
sd1 <- sd2 <- exp(3)
(phi <- logitlink(-1, inverse = TRUE))
rrn <- runif(nn)
mdata <- data.frame(y = ifelse(rrn < phi, rnorm(nn, mu1, sd1),
                               rnorm(nn, mu2, sd2)))
fit <- vglm(y ~ 1, mix2normal(eq.sd = TRUE), data = mdata)

# Compare the results
cfit <- coef(fit)
round(rbind('Estimated' = c(logitlink(cfit[1], inverse = TRUE),
                            cfit[2], exp(cfit[3]), cfit[4]),
            'Truth' = c(phi, mu1, sd1, mu2)), digits = 2)

# Plot the results
xx <- with(mdata, seq(min(y), max(y), len = 200))
plot(xx, (1-phi) * dnorm(xx, mu2, sd2), type = "l", xlab = "y",
     main = "red = estimate, blue = truth",
     col = "blue", ylab = "Density")
phi.est <- logitlink(coef(fit)[1], inverse = TRUE)
sd.est <- exp(coef(fit)[3])
lines(xx, phi * dnorm(xx, mu1, sd1), col = "blue")
lines(xx, phi.est * dnorm(xx, Coef(fit)[2], sd.est), col = "red")
lines(xx, (1-phi.est) * dnorm(xx, Coef(fit)[4], sd.est), col = "red")
abline(v = Coef(fit)[c(2, 4)], lty = 2, col = "red")
abline(v = c(mu1, mu2), lty = 2, col = "blue")
## End(Not run)
Estimates the three parameters of a mixture of two Poisson distributions by maximum likelihood estimation.
mix2poisson(lphi = "logitlink", llambda = "loglink",
            iphi = 0.5, il1 = NULL, il2 = NULL,
            qmu = c(0.2, 0.8), nsimEIM = 100, zero = "phi")
lphi , llambda
|
Link functions for the parameter |
iphi |
Initial value for |
il1 , il2
|
Optional initial value for |
qmu |
Vector with two values giving the probabilities relating
to the sample quantiles for obtaining initial values for
|
nsimEIM , zero
|
The probability function can be loosely written as

\[ P(Y = y) = \phi\,\mathrm{Poisson}(\mu_1) + (1 - \phi)\,\mathrm{Poisson}(\mu_2) \]

where \(\phi\) is the probability an observation belongs to the first group, and \(y = 0, 1, 2, \ldots\). The parameter \(\phi\) satisfies \(0 < \phi < 1\). The mean of \(Y\) is \(\phi \mu_1 + (1 - \phi) \mu_2\) and this is returned as the fitted values. By default, the three linear/additive predictors are \((\mathrm{logit}(\phi), \log(\mu_1), \log(\mu_2))^T\).
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
This VGAM family function requires care for a successful
application.
In particular, good initial values are required because
of the presence of local solutions. Therefore running
this function with several different combinations of
arguments such as iphi
, il1
, il2
,
qmu
is highly recommended. Graphical methods such as
hist
can be used as an aid.
With grouped data (i.e., using the weights
argument)
one has to use a large value of nsimEIM
;
see the example below.
This VGAM family function is experimental and should be used with care.
The response must be integer-valued since
dpois
is invoked.
Fitting this model successfully to data can be difficult
due to local solutions and ill-conditioned data. It pays to
fit the model several times with different initial values,
and check that the best fit looks reasonable. Plotting
the results is recommended. This function works better as \(\mu_1\) and \(\mu_2\) become more different. The default control argument trace = TRUE is to encourage monitoring convergence.
T. W. Yee
## Not run: 
# Example 1: simulated data
nn <- 1000
mu1 <- exp(2.5)  # Also known as lambda1
mu2 <- exp(3)
(phi <- logitlink(-0.5, inverse = TRUE))
mdata <- data.frame(y = rpois(nn, ifelse(runif(nn) < phi, mu1, mu2)))
mfit <- vglm(y ~ 1, mix2poisson, data = mdata)
coef(mfit, matrix = TRUE)

# Compare the results with the truth
round(rbind('Estimated' = Coef(mfit), 'Truth' = c(phi, mu1, mu2)), 2)

ty <- with(mdata, table(y))
plot(names(ty), ty, type = "h", main = "Orange=estimate, blue=truth",
     ylab = "Frequency", xlab = "y")
abline(v = Coef(mfit)[-1], lty = 2, col = "orange", lwd = 2)
abline(v = c(mu1, mu2), lty = 2, col = "blue", lwd = 2)

# Example 2: London Times data (Lange, 1997, p.31)
ltdata1 <- data.frame(deaths = 0:9,
                      freq = c(162, 267, 271, 185, 111,
                               61, 27, 8, 3, 1))
ltdata2 <- data.frame(y = with(ltdata1, rep(deaths, freq)))

# Usually this does not work well unless nsimEIM is large
Mfit <- vglm(deaths ~ 1, weight = freq, data = ltdata1,
             mix2poisson(iphi = 0.3, il1 = 1, il2 = 2.5,
                         nsimEIM = 5000))

# This works better in general
Mfit <- vglm(y ~ 1, mix2poisson(iphi = 0.3, il1 = 1, il2 = 2.5),
             data = ltdata2)
coef(Mfit, matrix = TRUE)
Coef(Mfit)
## End(Not run)
Estimates the three independent parameters of the MNSs blood group system.
MNSs(link = "logitlink", imS = NULL, ims = NULL, inS = NULL)
MNSs(link = "logitlink", imS = NULL, ims = NULL, inS = NULL)
link |
Link function applied to the three parameters.
See |
imS , ims , inS
|
Optional initial value for |
There are three independent parameters: m_S, m_s and n_S, say, so that n_s = 1 - m_S - m_s - n_S. We let the eta vector (transposed) be (g(m_S), g(m_s), g(n_S)), where g is the link function.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
The input can be a 6-column matrix of counts, where the columns are
MS, Ms, MNS, MNs, NS, Ns (in order).
Alternatively, the input can be a 6-column matrix of
proportions (so each row adds to 1) and the weights
argument is used to specify the total number of counts for each row.
T. W. Yee
Elandt-Johnson, R. C. (1971). Probability Models and Statistical Methods in Genetics, New York: Wiley.
AA.Aa.aa, AB.Ab.aB.ab, ABO, A1A2A3.
# Order matters only:
y <- cbind(MS = 295, Ms = 107, MNS = 379, MNs = 322, NS = 102, Ns = 214)
fit <- vglm(y ~ 1, MNSs("logitlink", .25, .28, .08), trace = TRUE)
fit <- vglm(y ~ 1, MNSs(link = logitlink), trace = TRUE, crit = "coef")
Coef(fit)
rbind(y, sum(y) * fitted(fit))
sqrt(diag(vcov(fit)))
This function returns a data.frame
with the
variables. It is applied to an object which inherits from
class "vlm"
(e.g., a fitted model of class "vglm"
).
model.framevlm(object, setupsmart = TRUE, wrapupsmart = TRUE, ...)
object |
a model object from the VGAM R package
that inherits from a vector linear model (VLM),
e.g., a model of class |
... |
further arguments such as |
setupsmart , wrapupsmart
|
Logical. Arguments to determine whether to use smart prediction. |
Since object is an object which inherits from class "vlm" (e.g., a fitted model of class "vglm"), the method will either return the saved model frame used when fitting the model (if any, selected by the argument model = TRUE) or pass the call used when fitting on to the default method.
This code implements smart prediction
(see smartpred
).
A data.frame
containing the variables used in
the object
plus those specified in ...
.
Chambers, J. M. (1992). Data for models. Chapter 3 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
model.frame, model.matrixvlm, predictvglm, smartpred.
# Illustrates smart prediction
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2),
            multinomial, pneumo, trace = TRUE, x = FALSE)
class(fit)

check1 <- head(model.frame(fit))
check1
check2 <- model.frame(fit, data = head(pneumo))
check2
all.equal(unlist(check1), unlist(check2))  # Should be TRUE

q0 <- head(predict(fit))
q1 <- head(predict(fit, newdata = pneumo))
q2 <- predict(fit, newdata = head(pneumo))
all.equal(q0, q1)  # Should be TRUE
all.equal(q1, q2)  # Should be TRUE
Creates a model matrix. Two types can be
returned: a large one (class "vlm"
or one that inherits
from this such as "vglm"
) or a small one
(such as returned if it were of class "lm"
).
model.matrixqrrvglm(object, type = c("latvar", "lm", "vlm"), ...)
object |
an object of a class |
type |
Type of model (or design) matrix returned.
The first is the default.
The value |
... |
further arguments passed to or from other methods. |
This function creates one of several design matrices
from object
.
For example, this can be a small LM object or a big VLM object.
When type = "vlm"
this function calls fnumat2R()
to construct the big model matrix given C.
That is, the constrained coefficients are assumed known,
so that something like a large Poisson or logistic regression
is set up.
This is because all responses are fitted simultaneously here.
The columns are labelled in the following order and with the following prefixes: "A" for the A matrix (linear in the latent variables), "D" for the D matrix (quadratic in the latent variables), and "x1." for the B1 matrix (usually contains the intercept; see the argument noRRR in qrrvglm.control).
The design matrix after scaling for a regression model with the specified formula and data. By 'after scaling' it is meant that it matches the output of coef(qrrvglmObject) rather than the original scaling of the fitted object.
model.matrixvlm, cqo, vcovqrrvglm.
## Not run: 
set.seed(1); n <- 40; p <- 3; S <- 4; myrank <- 1
mydata <- rcqo(n, p, S, Rank = myrank, es.opt = TRUE, eq.max = TRUE)
(myform <- attr(mydata, "formula"))
mycqo <- cqo(myform, poissonff, data = mydata,
             I.tol = TRUE, Rank = myrank, Bestof = 5)
model.matrix(mycqo, type = "latvar")
model.matrix(mycqo, type = "lm")
model.matrix(mycqo, type = "vlm")
## End(Not run)
Creates a design matrix. Two types can be
returned: a large one (class "vlm"
or one that inherits
from this such as "vglm"
) or a small one
(such as returned if it were of class "lm"
).
model.matrixvlm(object, type = c("vlm", "lm", "lm2", "bothlmlm2"),
                linpred.index = NULL, label.it = TRUE, ...)
object |
an object of a class that inherits from the vector linear model (VLM). |
type |
Type of design matrix returned. The first is the default.
The value |
linpred.index |
Vector of integers.
The index for a linear/additive predictor,
it must have values from the set {1, 2, ..., M}. |
label.it |
Logical. Label the row and columns with character names?
If |
... |
further arguments passed to or from other methods.
These include |
This function creates a design matrix from object
.
This can be a small LM object or a big VLM object (default).
The latter is constructed from the former and the constraint
matrices.
This code implements smart prediction
(see smartpred
).
The design matrix for a regression model with the specified formula
and data.
If type = "bothlmlm2"
then a list is returned with components
"X"
and "Xm2"
.
Sometimes
(especially if x = TRUE
when calling vglm
)
the model matrix has attributes:
"assign"
("lm"
-type) and
"vassign"
("vlm"
-type) and
"orig.assign.lm"
("lm"
-type).
These are used internally a lot for bookkeeping,
especially regarding
the columns of both types of model matrices.
In particular, constraint matrices and variable selection
rely on this information a lot.
The "orig.assign.lm"
is the ordinary "assign"
attribute for lm
and glm
objects.
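A minimal sketch of inspecting these attributes (using fit2 from Example (II) below, and assuming the attributes are present on the returned matrices):

mm.lm  <- model.matrix(fit2, type = "lm")
mm.vlm <- model.matrix(fit2, type = "vlm")
attr(mm.lm,  "assign")          # "lm"-type bookkeeping
attr(mm.lm,  "orig.assign.lm")  # Ordinary "assign", as for lm()/glm()
attr(mm.vlm, "vassign")         # "vlm"-type bookkeeping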
Chambers, J. M. (1992). Data for models. Chapter 3 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
model.matrix
,
model.framevlm
,
predictvglm
,
smartpred
,
constraints.vlm
,
trim.constraints
,
add1.vglm
,
drop1.vglm
,
step4vglm
.
# (I) Illustrates smart prediction ,,,,,,,,,,,,,,,,,,,,,,,
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~
            sm.poly(c(sm.scale(let)), 2),
            multinomial, data = pneumo, trace = TRUE, x = FALSE)
class(fit)
fit@smart.prediction  # Data-dependent parameters
fit@x  # Not saved on the object
model.matrix(fit)
model.matrix(fit, linpred.index = 1, type = "lm")
model.matrix(fit, linpred.index = 2, type = "lm")
(Check1 <- head(model.matrix(fit, type = "lm")))
(Check2 <- model.matrix(fit, data = head(pneumo), type = "lm"))
all.equal(c(Check1), c(Check2))  # Should be TRUE
q0 <- head(predict(fit))
q1 <- head(predict(fit, newdata = pneumo))
q2 <- predict(fit, newdata = head(pneumo))
all.equal(q0, q1)  # Should be TRUE
all.equal(q1, q2)  # Should be TRUE

# (II) Attributes ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
fit2 <- vglm(cbind(normal, mild, severe) ~ let,  # x = TRUE
             multinomial, data = pneumo, trace = TRUE)
fit2@x  # "lm"-type; saved on the object; note the attributes
model.matrix(fit2, type = "lm")   # Note the attributes
model.matrix(fit2, type = "vlm")  # Note the attributes
Modify a matrix by shifting successive elements.
moffset(mat, roffset = 0, coffset = 0, postfix = "",
        rprefix = "Row.", cprefix = "Col.")
mat |
Data frame or matrix.
This ought to have at least three rows and three columns.
The elements are shifted in the order of c(mat). |
roffset , coffset
|
Numeric or character.
If numeric, the amount of shift (offset) for each row and column.
The default is no change to mat. |
postfix |
Character. Modified rows and columns are renamed by pasting this argument to the end of each name. The default is no change. |
rprefix , cprefix
|
Same as |
This function allows a matrix to be rearranged so that
element (roffset
+ 1, coffset
+ 1)
becomes the (1, 1) element.
The elements are assumed to be ordered in the same way
as the elements of c(mat)
.
This function is applicable to, e.g.,
alcoff
,
where it is useful to define the effective day
as starting
at some other hour than midnight, e.g., 6.00am.
This is because partying on Friday night continues into
Saturday morning, so it is more interpretable to use
the effective day when considering a daily effect.
This is a data preprocessing function for rcim
and plotrcim0
. The difference between
Rcim
and moffset
is that
Rcim
only reorders the levels of the
rows and columns,
so that the data are shifted but not moved.
That is, a value in one row stays in that row,
and likewise for columns.
But in moffset
values in one column can be moved to a previous column.
See the examples below.
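A small sketch of this difference, using a hypothetical 3 x 3 named matrix (moffset() needs at least three rows and three columns, and named dimnames):

mymat <- matrix(1:9, 3, 3,
                dimnames = list(paste0("Row.", 1:3), paste0("Col.", 1:3)))
moffset(mymat, 1, 1, "*")  # Element (2, 2) becomes the (1, 1) element
Rcim(mymat, 2, 2)          # Values stay within their rows and columns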
A matrix of the same dimension as its input.
The input mat
should have row names and column names.
T. W. Yee, Alfian F. Hadi.
Rcim
,
rcim
,
plotrcim0
,
alcoff
,
crashi
.
# Some day's data is moved to previous day:
moffset(alcoff, 3, 2, "*")
Rcim(alcoff, 3 + 1, 2 + 1)  # Data does not move as much.
alcoff  # Original data
moffset(alcoff, 3, 2, "*") -
  Rcim(alcoff, 3+1, 2+1)  # Note the differences

# An 'effective day' data set:
alcoff.e <- moffset(alcoff, roffset = "6", postfix = "*")
fit.o <- rcim(alcoff)    # default baselines are 1st row and col
fit.e <- rcim(alcoff.e)  # default baselines are 1st row and col

## Not run: 
par(mfrow = c(2, 2), mar = c(9, 4, 2, 1))
plot(fit.o, rsub = "Not very interpretable",
     csub = "Not very interpretable")
plot(fit.e, rsub = "More interpretable",
     csub = "More interpretable")
## End(Not run)

# Some checking
all.equal(moffset(alcoff), alcoff)  # Should be no change
moffset(alcoff, 1, 1, "*")
moffset(alcoff, 2, 3, "*")
moffset(alcoff, 1, 0, "*")
moffset(alcoff, 0, 1, "*")
moffset(alcoff, "6", "Mon", "*")  # This one is good

# Customise row and column baselines
fit2 <- rcim(Rcim(alcoff.e, rbaseline = "11", cbaseline = "Mon*"))
Computes the multilogit transformation, including its inverse and the first two derivatives.
multilogitlink(theta, refLevel = "(Last)", M = NULL,
               whitespace = FALSE, bvalue = NULL, inverse = FALSE,
               deriv = 0, all.derivs = FALSE, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
refLevel , M , whitespace
|
See |
bvalue |
See |
all.derivs |
Logical. This is currently experimental only. |
inverse , deriv , short , tag
|
Details at |
The multilogitlink()
link function is a generalization of the
logitlink
link to M + 1 levels/classes. It forms the
basis of the
multinomial
logit model. It is sometimes
called the multi-logit link or the multinomial logit
link; some people use softmax too. When its inverse function
is computed it returns values which are positive and add to unity.
For multilogitlink
with deriv = 0
,
the multilogit of theta
,
i.e.,
log(theta[, j]/theta[, M+1])
when inverse = FALSE
,
and if inverse = TRUE
then
exp(theta[, j])/(1+rowSums(exp(theta)))
.
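As a standalone numeric sketch of these two directions (the probability values are illustrative only):

probs <- cbind(0.2, 0.3, 0.5)         # M + 1 = 3 columns
(etas <- multilogitlink(probs))       # log(probs[, j] / probs[, 3])
multilogitlink(etas, inverse = TRUE)  # Should recover probs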
For deriv = 1
, then the function returns
d eta
/ d theta
as a function of
theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Here, all logarithms are natural logarithms, i.e., to base e.
Numerical instability may occur when theta
is
close to 1 or 0 (for multilogitlink
).
One way of overcoming this is to use, e.g., bvalue
.
Currently care.exp()
is used to avoid NA
s being
returned if the probability is too close to 1.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
multinomial
,
logitlink
,
gaitdpoisson
,
normal.vcm
,
CommonVGAMffArguments
.
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let,  # For illustration only!
            multinomial, trace = TRUE, data = pneumo)
fitted(fit)
predict(fit)

multilogitlink(fitted(fit))
multilogitlink(fitted(fit)) - predict(fit)  # Should be all 0s

multilogitlink(predict(fit), inverse = TRUE)  # rowSums() add to unity
multilogitlink(predict(fit), inverse = TRUE, refLevel = 1)
multilogitlink(predict(fit), inverse = TRUE) -
  fitted(fit)  # Should be all 0s

multilogitlink(fitted(fit), deriv = 1)
multilogitlink(fitted(fit), deriv = 2)
Fits a multinomial logit model (MLM) to a (preferably unordered) factor response.
multinomial(zero = NULL, parallel = FALSE, nointercept = NULL,
            refLevel = "(Last)", ynames = FALSE, imethod = 1,
            imu = NULL, byrow.arg = FALSE, Thresh = NULL,
            Trev = FALSE, Tref = if (Trev) "M" else 1,
            whitespace = FALSE)
zero |
Can be an integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
Any values must be from the set {1, 2, ..., M}. |
parallel |
A logical, or formula specifying which terms have equal/unequal coefficients. |
ynames |
Logical.
If |
nointercept , whitespace
|
See |
imu , byrow.arg
|
See |
refLevel |
Either a (1) single positive integer or (2) a value of
the factor or (3) a character string.
If inputted as an integer then it specifies which
column of the response matrix is the reference or baseline level.
The default is the last one (the |
imethod |
Choosing 2 will use the mean sample proportions of each
column of the response matrix, which corresponds to
the MLEs for intercept-only models.
See |
Thresh , Trev , Tref
|
Same as |
In this help file the response Y is
assumed to be a factor with unordered values
1, 2, ..., M+1, so
that M
is the number of linear/additive
predictors eta_j
.
The default model can be written
eta_j = log(P[Y = j] / P[Y = M+1])
where eta_j is the jth
linear/additive predictor.
Here, j = 1, ..., M, and
eta_{M+1}
is 0 by definition. That is, the last level
of the factor,
or last column of the response matrix, is
taken as the
reference level or baseline—this is for
identifiability
of the parameters. The reference or
baseline level can
be changed with the
refLevel
argument.
In almost all the literature, the constraint matrices associated with
this family of models are known. For example, setting parallel
= TRUE
will make all constraint matrices (including the intercept)
equal to a vector of 1's; to suppress the intercepts from
being parallel then set
parallel = FALSE ~ 1
. If the
constraint matrices are unknown and to be estimated, then this can be
achieved by fitting the model as a reduced-rank vector generalized
linear model (RR-VGLM; see rrvglm
). In particular, a
multinomial logit model with unknown constraint matrices is known as a
stereotype model (Anderson, 1984), and can be fitted with
rrvglm
.
The above details correspond to the ordinary MLM where all the levels are altered (in the terminology of GAITD regression).
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
No check is made to verify that the response is nominal.
See CommonVGAMffArguments
for more warnings.
The response should be either a matrix of counts
(with row sums that are all positive), or a
factor. In both cases, the y
slot returned by
vglm
/vgam
/rrvglm
is the matrix of sample proportions.
The multinomial logit model is more appropriate for a nominal
(unordered) factor response than for an
ordinal (ordered) factor
response.
Models more suited for the latter include those based on
cumulative probabilities, e.g., cumulative
.
multinomial
is prone to numerical difficulties if
the groups are separable and/or the fitted probabilities
are close to 0 or 1. The fitted values returned
are estimates of the probabilities P[Y = j] for
j = 1, ..., M+1. See safeBinaryRegression
for the logistic regression case.
Here is an example of the usage of the parallel
argument. If there are covariates x2
, x3
and x4
, then parallel = TRUE ~ x2 + x3 -
1
and parallel = FALSE ~ x4
are equivalent. This
would constrain the regression coefficients for x2
and x3
to be equal; those of the intercepts and
x4
would be different.
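A hedged sketch of the two equivalent calls just described (yfac, x2, x3, x4 and mydf are hypothetical names, not from the examples below):

# Both calls constrain the x2 and x3 coefficients to be parallel,
# while the intercepts and x4 are left unconstrained:
fitA <- vglm(yfac ~ x2 + x3 + x4,
             multinomial(parallel = TRUE ~ x2 + x3 - 1), data = mydf)
fitB <- vglm(yfac ~ x2 + x3 + x4,
             multinomial(parallel = FALSE ~ x4), data = mydf)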
In Example 5 below, a conditional logit model is
fitted to an artificial data set that explores how
cost and travel time affect people's decision about
how to travel to work. Walking is the baseline group.
The variable Cost.car
is the difference between
the cost of travel to work by car and walking, etc. The
variable Time.car
is the difference between
the travel duration/time to work by car and walking,
etc. For other details about the xij
argument see
vglm.control
and fill1
.
The multinom
function in the
nnet package uses the first level of the factor as
baseline, whereas the last level of the factor is used
here. Consequently the estimated regression coefficients
differ.
Thomas W. Yee
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
Anderson, J. A. (1984). Regression and ordered categorical variables. Journal of the Royal Statistical Society, Series B, Methodological, 46, 1–30.
Hastie, T. J., Tibshirani, R. J. and Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed. New York, USA: Springer-Verlag.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Tutz, G. (2012). Regression for Categorical Data, Cambridge: Cambridge University Press.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10.
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
multilogitlink
,
margeff
,
cumulative
,
acat
,
cratio
,
sratio
,
CM.equid
,
CommonVGAMffArguments
,
dirichlet
,
dirmultinomial
,
rrvglm
,
fill1
,
Multinomial
,
gaitdpoisson
,
Gaitdpois
,
iris
.
# Example 1: Regn spline VGAM: marital status versus age
data(marital.nz)
ooo <- with(marital.nz, order(age))
om.nz <- marital.nz[ooo, ]
fit1 <- vglm(mstatus ~ sm.bs(age), multinomial, om.nz)
coef(fit1, matrix = TRUE)  # Mostly meaningless

## Not run: 
with(om.nz, matplot(age, fitted(fit1), type = "l",
                    las = 1, lwd = 2))
legend("topright", leg = colnames(fitted(fit1)),
       lty = 1:4, col = 1:4, lwd = 2)
## End(Not run)

# Example 2a: a simple example
ycounts <- t(rmultinom(10, size = 20, prob = c(0.1, 0.2, 0.8)))
fit <- vglm(ycounts ~ 1, multinomial)
head(fitted(fit))  # Proportions
fit@prior.weights  # NOT recommended for the prior weights
weights(fit, type = "prior", matrix = FALSE)  # The better method
depvar(fit)  # Sample proportions; same as fit@y
constraints(fit)  # Constraint matrices

# Example 2b: Different reference level used as the baseline
fit2 <- vglm(ycounts ~ 1, multinomial(refLevel = 2))
coef(fit2, matrix = TRUE)
coef(fit , matrix = TRUE)  # Easy to reconcile this output with fit2

# Example 3: The response is a factor.
nn <- 10
dframe3 <- data.frame(yfac = gl(3, nn, labels = c("Ctrl", "Trt1", "Trt2")),
                      x2 = runif(3 * nn))
myrefLevel <- with(dframe3, yfac[12])
fit3a <- vglm(yfac ~ x2, multinomial(refLevel = myrefLevel), dframe3)
fit3b <- vglm(yfac ~ x2, multinomial(refLevel = 2), dframe3)
coef(fit3a, matrix = TRUE)  # "Trt1" is the reference level
coef(fit3b, matrix = TRUE)  # "Trt1" is the reference level
margeff(fit3b)

# Example 4: Fit a rank-1 stereotype model
fit4 <- rrvglm(Country ~ Width + Height + HP, multinomial, car.all)
coef(fit4)  # Contains the C matrix
constraints(fit4)$HP  # The A matrix
coef(fit4, matrix = TRUE)  # The B matrix
Coef(fit4)@C  # The C matrix
concoef(fit4)  # Better to get the C matrix this way
Coef(fit4)@A  # The A matrix
svd(coef(fit4, matrix = TRUE)[-1, ])$d  # Has rank 1; = C %*% t(A)
# Classification (but watch out for NAs in some of the variables):
apply(fitted(fit4), 1, which.max)  # Classification
colnames(fitted(fit4))[apply(fitted(fit4), 1, which.max)]  # Classification
apply(predict(fit4, car.all, type = "response"),
      1, which.max)  # Ditto

# Example 5: Using the xij argument (aka conditional logit model)
set.seed(111)
nn <- 100  # Number of people who travel to work
M <- 3  # There are M+1 models of transport to go to work
ycounts <- matrix(0, nn, M+1)
ycounts[cbind(1:nn, sample(x = M+1, size = nn, replace = TRUE))] = 1
dimnames(ycounts) <- list(NULL, c("bus", "train", "car", "walk"))
gotowork <- data.frame(cost.bus  = runif(nn), time.bus  = runif(nn),
                       cost.train= runif(nn), time.train= runif(nn),
                       cost.car  = runif(nn), time.car  = runif(nn),
                       cost.walk = runif(nn), time.walk = runif(nn))
gotowork <- round(gotowork, digits = 2)  # For convenience
gotowork <- transform(gotowork,
                      Cost.bus   = cost.bus   - cost.walk,
                      Cost.car   = cost.car   - cost.walk,
                      Cost.train = cost.train - cost.walk,
                      Cost       = cost.train - cost.walk,  # for labelling
                      Time.bus   = time.bus   - time.walk,
                      Time.car   = time.car   - time.walk,
                      Time.train = time.train - time.walk,
                      Time       = time.train - time.walk)  # for labelling
fit <- vglm(ycounts ~ Cost + Time,
            multinomial(parall = TRUE ~ Cost + Time - 1),
            xij = list(Cost ~ Cost.bus + Cost.train + Cost.car,
                       Time ~ Time.bus + Time.train + Time.car),
            form2 = ~ Cost + Cost.bus + Cost.train + Cost.car +
                      Time + Time.bus + Time.train + Time.car,
            data = gotowork, trace = TRUE)
head(model.matrix(fit, type = "lm"))   # LM model matrix
head(model.matrix(fit, type = "vlm"))  # Big VLM model matrix
coef(fit)
coef(fit, matrix = TRUE)
constraints(fit)
summary(fit)
max(abs(predict(fit) -
        predict(fit, new = gotowork)))  # Should be 0
Density, and random generation for the (four parameter bivariate) Linear Model–Bernoulli copula distribution.
dN1binom(x1, x2, mean = 0, sd = 1, prob, apar = 0,
         copula = "gaussian", log = FALSE)
rN1binom(n, mean = 0, sd = 1, prob, apar = 0,
         copula = "gaussian")
x1 , x2
|
vector of quantiles.
The valid values of |
n |
number of observations.
Same as |
copula |
See |
mean , sd , prob , apar
|
See |
log |
Logical.
If |
See N1binomial
, the VGAM
family function for estimating the
parameters by maximum likelihood estimation,
for details.
dN1binom
gives the probability density/mass function,
rN1binom
generates random deviates and returns
a two-column matrix.
T. W. Yee
## Not run: 
nn <- 1000; apar <- rhobitlink(1.5, inverse = TRUE)
prob <- logitlink(0.5, inverse = TRUE)
mymu <- 1; sdev <- exp(1)
mat <- rN1binom(nn, mymu, sdev, prob, apar)
bndata <- data.frame(y1 = mat[, 1], y2 = mat[, 2])
with(bndata, plot(jitter(y1), jitter(y2), col = "blue"))
## End(Not run)
Estimate the four parameters of
the (bivariate) linear model–binomial copula
mixed data type model
by maximum likelihood estimation.
N1binomial(lmean = "identitylink", lsd = "loglink", lvar = "loglink", lprob = "logitlink", lapar = "rhobitlink", zero = c(if (var.arg) "var" else "sd", "apar"), nnodes = 20, copula = "gaussian", var.arg = FALSE, imethod = 1, isd = NULL, iprob = NULL, iapar = NULL)
N1binomial(lmean = "identitylink", lsd = "loglink", lvar = "loglink", lprob = "logitlink", lapar = "rhobitlink", zero = c(if (var.arg) "var" else "sd", "apar"), nnodes = 20, copula = "gaussian", var.arg = FALSE, imethod = 1, isd = NULL, iprob = NULL, iapar = NULL)
lmean , lsd , lvar , lprob , lapar
|
Details at |
imethod , isd , iprob , iapar
|
Initial values.
Details at |
zero |
Details at |
nnodes |
Number of nodes and weights for the Gauss–Hermite (GH) quadrature. While a higher value should be more accurate, setting an excessive value runs the risk of evaluating some special functions near the boundary of the parameter space and producing numerical problems. |
copula |
Type of copula used. Currently only the bivariate normal is used but more might be implemented in the future. |
var.arg |
See |
The bivariate response comprises Y1
from the linear model, having parameters
mean
and sd
for its mean and standard deviation,
and the binary Y2
having parameter
prob
for its mean.
In the joint probability density/mass function the normal
density is multiplied by the Bernoulli probability, where
the latter is adjusted
according to the association parameter apar.
Thus there is an underlying bivariate normal
distribution, and a copula is used to bring the
two marginal distributions together.
The adjustment involves
pnorm
, the
cumulative distribution function
of a standard univariate normal.
The first marginal
distribution is a normal distribution
for the linear model.
The second column of the response must
have values 0 or 1,
e.g.,
Bernoulli random variables.
When apar = 0
the two responses are independent.
Together, this family function combines
uninormal
and
binomialff
.
If the responses are correlated then
a more efficient joint analysis
should result.
This VGAM family function cannot handle
multiple responses. Only a two-column
matrix is allowed.
The two-column fitted
value matrix has columns holding the
two marginal means.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
This VGAM family function is fragile.
Because the EIMs are approximated by
GH quadrature, convergence
may be a little slower than for other models
whose EIM is tractable.
Also, the log-likelihood may be flat at the MLE,
especially with respect to apar,
because the correlation
between the two mixed data types may be weak.
It pays to set trace = TRUE
to
monitor convergence, especially when
abs(apar)
is high.
T. W. Yee
Song, P. X.-K. (2007). Correlated Data Analysis: Modeling, Analytics, and Applications. Springer.
rN1binom
,
N1poisson
,
binormalcop
,
uninormal
,
binomialff
,
pnorm
.
nn <- 1000; mymu <- 1; sdev <- exp(1)
apar <- rhobitlink(0.5, inverse = TRUE)
prob <- logitlink(0.5, inverse = TRUE)
mat <- rN1binom(nn, mymu, sdev, prob, apar)
nbdata <- data.frame(y1 = mat[, 1], y2 = mat[, 2])
fit1 <- vglm(cbind(y1, y2) ~ 1, N1binomial, nbdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
summary(fit1)
confint(fit1)
Density, and random generation for the (four parameter bivariate) Linear Model–Poisson copula distribution.
dN1pois(x1, x2, mean = 0, sd = 1, lambda, apar = 0, doff = 5,
        copula = "gaussian", log = FALSE)
rN1pois(n, mean = 0, sd = 1, lambda, apar = 0, doff = 5,
        copula = "gaussian")
x1 , x2
|
vector of quantiles.
The valid values of |
n |
number of observations.
Same as |
copula |
See |
mean , sd , lambda , apar
|
See |
doff |
See |
log |
Logical.
If |
See N1poisson
, the VGAM
family function for estimating the
parameters by maximum likelihood estimation,
for details.
dN1pois
gives the probability density/mass function,
rN1pois
generates random deviates and returns
a two-column matrix.
T. W. Yee
## Not run: 
nn <- 1000; mymu <- 1; sdev <- exp(1)
apar <- rhobitlink(0.4, inverse = TRUE)
lambda <- loglink(1, inverse = TRUE)
mat <- rN1pois(nn, mymu, sdev, lambda, apar)
pndata <- data.frame(y1 = mat[, 1], y2 = mat[, 2])
with(pndata, plot(jitter(y1), jitter(y2), col = 4))
## End(Not run)
Estimate the four parameters of
the (bivariate) linear model–Poisson copula
mixed data type model
by maximum likelihood estimation.
N1poisson(lmean = "identitylink", lsd = "loglink", lvar = "loglink", llambda = "loglink", lapar = "rhobitlink", zero = c(if (var.arg) "var" else "sd", "apar"), doff = 5, nnodes = 20, copula = "gaussian", var.arg = FALSE, imethod = 1, isd = NULL, ilambda = NULL, iapar = NULL)
N1poisson(lmean = "identitylink", lsd = "loglink", lvar = "loglink", llambda = "loglink", lapar = "rhobitlink", zero = c(if (var.arg) "var" else "sd", "apar"), doff = 5, nnodes = 20, copula = "gaussian", var.arg = FALSE, imethod = 1, isd = NULL, ilambda = NULL, iapar = NULL)
lmean , lsd , lvar , llambda , lapar
|
Details at |
imethod , isd , ilambda , iapar
|
Initial values.
Details at |
zero |
Details at |
doff |
Numeric of unit length, the denominator offset used in the
transformation that maps lambda onto the unit interval
(see below). |
nnodes , copula
|
Details at |
var.arg |
See |
The bivariate response comprises Y1
from a linear model,
having parameters
mean
and sd
for its mean and standard deviation,
and the Poisson count Y2
having parameter
lambda
for its mean.
In the joint probability density/mass function the Poisson
probability is adjusted
according to the association parameter apar,
via a monotonic transformation that
maps lambda
onto the unit interval
(see the doff argument).
Thus there is an underlying bivariate normal
distribution, and a copula is used to bring the
two marginal distributions together.
The adjustment involves
pnorm
, the
cumulative distribution function
of a standard univariate normal.
The first marginal
distribution is a normal distribution
for the linear model.
The second column of the response must
have nonnegative integer values.
When apar = 0
the two responses are independent.
Together, this family function combines
uninormal
and
poissonff
.
If the responses are correlated then
a more efficient joint analysis
should result.
The second marginal distribution allows
for overdispersion relative to an ordinary
Poisson distribution—a property due to
the copula adjustment.
This VGAM family function cannot handle
multiple responses.
Only a two-column matrix is allowed.
The two-column fitted
value matrix has columns holding the
two marginal means.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
This VGAM family function is based on
N1binomial
and shares many
properties with it.
It pays to set trace = TRUE
to
monitor convergence, especially when
abs(apar)
is high.
T. W. Yee
rN1pois
,
N1binomial
,
binormalcop
,
uninormal
,
poissonff
,
dpois
.
apar <- rhobitlink(0.3, inverse = TRUE)
nn <- 1000; mymu <- 1; sdev <- exp(1)
lambda <- loglink(1, inverse = TRUE)
mat <- rN1pois(nn, mymu, sdev, lambda, apar)
npdata <- data.frame(y1 = mat[, 1], y2 = mat[, 2])
with(npdata, var(y2) / mean(y2))  # Overdispersion
fit1 <- vglm(cbind(y1, y2) ~ 1, N1poisson, npdata, trace = TRUE)
coef(fit1, matrix = TRUE)
Coef(fit1)
head(fitted(fit1))
summary(fit1)
confint(fit1)
Estimation of the two parameters of the Nakagami distribution by maximum likelihood estimation.
nakagami(lscale = "loglink", lshape = "loglink", iscale = 1, ishape = NULL, nowarning = FALSE, zero = "shape")
nakagami(lscale = "loglink", lshape = "loglink", iscale = 1, ishape = NULL, nowarning = FALSE, zero = "shape")
nowarning |
Logical. Suppress a warning? |
lscale , lshape
|
Parameter link functions applied to the
scale and shape parameters.
Log links ensure they are positive.
See |
iscale , ishape
|
Optional initial values for the scale and shape parameters.
For |
zero |
The Nakagami distribution, which is useful for modelling
wireless systems such as radio links, can be written
f(y) = 2 * (shape/scale)^shape * y^(2*shape - 1) *
exp(-shape * y^2 / scale) / gamma(shape)
for y > 0,
shape > 0,
scale > 0.
The mean of Y
is sqrt(scale/shape) * gamma(shape + 0.5) / gamma(shape)
and these are returned as the fitted values.
By default, the linear/additive predictors are
eta1 = log(scale)
and
eta2 = log(shape).
Fisher scoring is implemented.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The Nakagami distribution is also known as the
Nakagami-m distribution, where m = shape here.
Special cases:
m = 1/2 is a one-sided Gaussian
distribution and
m = 1 is a Rayleigh distribution.
The second moment is
E(Y^2) = scale.
If Y has a Nakagami distribution with parameters
shape and scale then
Y^2 has a gamma
distribution with shape parameter shape and scale
parameter scale/shape.
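A quick simulation sketch of this gamma relationship (the parameter values are illustrative only):

set.seed(123)
shape <- 2; Scale <- 3
y <- rnaka(10000, scale = Scale, shape = shape)
mean(y^2)  # Should be close to Scale, the second moment
mean(rgamma(10000, shape = shape, scale = Scale/shape))  # Ditto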
T. W. Yee
Nakagami, M. (1960). The m-distribution: a general formula of intensity distribution of rapid fading, pp.3–36 in: Statistical Methods in Radio Wave Propagation. W. C. Hoffman, Ed., New York: Pergamon.
nn <- 1000; shape <- exp(0); Scale <- exp(1)
ndata <- data.frame(y1 = sqrt(rgamma(nn, shape = shape,
                                     scale = Scale/shape)))
nfit <- vglm(y1 ~ 1, nakagami, data = ndata,
             trace = TRUE, crit = "coef")
ndata <- transform(ndata, y2 = rnaka(nn, scale = Scale,
                                     shape = shape))
nfit <- vglm(y2 ~ 1, nakagami(iscale = 3), data = ndata,
             trace = TRUE)
head(fitted(nfit))
with(ndata, mean(y2))
coef(nfit, matrix = TRUE)
(Cfit <- Coef(nfit))
## Not run: 
sy <- with(ndata, sort(y2))
hist(with(ndata, y2), prob = TRUE, main = "", xlab = "y",
     ylim = c(0, 0.6), col = "lightblue")
lines(dnaka(sy, scale = Cfit["scale"], shape = Cfit["shape"]) ~ sy,
      data = ndata, col = "orange")
## End(Not run)
Density, cumulative distribution function, quantile function and random generation for the Nakagami distribution.
dnaka(x, scale = 1, shape, log = FALSE)
pnaka(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qnaka(p, scale = 1, shape, ...)
rnaka(n, scale = 1, shape, Smallno = 1.0e-6)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
scale , shape
|
arguments for the parameters of the distribution.
See |
Smallno |
Numeric, a small value used by the rejection method for determining
the upper limit of the distribution.
That is, |
... |
Arguments that can be passed into |
log |
Logical.
If |
lower.tail , log.p
|
See nakagami
for more details.
dnaka
gives the density,
pnaka
gives the cumulative distribution function,
qnaka
gives the quantile function, and
rnaka
generates random deviates.
T. W. Yee and Kai Huang
## Not run: 
x <- seq(0, 3.2, len = 200)
plot(x, dgamma(x, shape = 1), type = "n", col = "black", ylab = "",
     ylim = c(0, 1.5), main = "dnaka(x, shape = shape)")
lines(x, dnaka(x, shape = 1), col = "orange")
lines(x, dnaka(x, shape = 2), col = "blue")
lines(x, dnaka(x, shape = 3), col = "green")
legend(2, 1.0, col = c("orange", "blue", "green"),
       lty = rep(1, len = 3), legend = paste("shape =", c(1, 2, 3)))

plot(x, pnorm(x), type = "n", col = "black", ylab = "",
     ylim = 0:1, main = "pnaka(x, shape = shape)")
lines(x, pnaka(x, shape = 1), col = "orange")
lines(x, pnaka(x, shape = 2), col = "blue")
lines(x, pnaka(x, shape = 3), col = "green")
legend(2, 0.6, col = c("orange", "blue", "green"),
       lty = rep(1, len = 3), legend = paste("shape =", c(1, 2, 3)))
## End(Not run)

probs <- seq(0.1, 0.9, by = 0.1)
pnaka(qnaka(p = probs, shape = 2), shape = 2) - probs  # Should be all 0
Computes the negative binomial canonical link transformation, including its inverse and the first two derivatives.
nbcanlink(theta, size = NULL, wrt.param = NULL, bvalue = NULL,
          inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. Typically the mean of a negative binomial distribution (NBD). See below for further details. |
size , wrt.param
|
|
bvalue |
Details at |
inverse , deriv , short , tag
|
Details at |
The NBD canonical link is
eta = log(mu / (mu + size))
where mu
is the NBD mean and size is the index parameter.
The canonical link is used for theoretically
relating the NBD to the GLM class.
This link function was specifically written for
negbinomial
and
negbinomial.size
,
and should not be used elsewhere
(these VGAM family functions have
code that
specifically handles nbcanlink()
).
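A minimal numeric sketch of the definition above (assuming eta = log(mu / (mu + size)); the values are illustrative only):

mu <- 4; kay <- cbind(2)   # size as a one-column matrix, as in the example
log(mu / (mu + 2))         # By the definition above
nbcanlink(mu, size = kay)  # Should agree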
Estimation with the NB canonical link
has a somewhat interesting history.
If we take the problem as beginning with the admission of
McCullagh and Nelder (1983; first edition, p.195)
[see also McCullagh and Nelder (1989, p.374)]
that the NB is little used in
applications and has a “problematical” canonical link
then it appears
only one other publicized attempt was made to
solve the problem seriously.
This was Hilbe, who produced a defective solution.
However, Miranda and Yee (2023) solve
this four-decade-old problem using
total derivatives and
it is implemented by using
nbcanlink
with
negbinomial
.
Note that early versions of VGAM had
a defective solution.
For deriv = 0
, the function returns the above expression
when inverse = FALSE
, and
if inverse = TRUE
then
kmatrix / expm1(-theta)
where theta
is really eta
.
For deriv = 1
, then the function
returns
d eta
/ d theta
as a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it
returns the reciprocal.
While theoretically nice, this function is not recommended
in general since its value is always negative
(linear predictors
ought to be unbounded in general). A loglink
link for argument lmu
is recommended instead.
Numerical instability may occur when theta
is close to 0 or 1.
Values of theta
which are less than or
equal to 0 can be
replaced by bvalue
before computing the link function value.
See Links
.
Victor Miranda and Thomas W. Yee.
Hilbe, J. M. (2011). Negative Binomial Regression, 2nd Edition. Cambridge: Cambridge University Press.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Miranda-Soberanis, V. F. and Yee, T. W. (2023). Two-parameter link functions, with applications to negative binomial, Weibull and quantile regression. Computational Statistics, 38, 1463–1485.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
negbinomial
,
negbinomial.size
.
nbcanlink("mu", short = FALSE) mymu <- 1:10 # Test some basic operations: kmatrix <- cbind(runif(length(mymu))) eta1 <- nbcanlink(mymu, size = kmatrix) ans2 <- nbcanlink(eta1, size = kmatrix, inverse = TRUE) max(abs(ans2 - mymu)) # Should be 0 ## Not run: mymu <- seq(0.5, 10, length = 101) kmatrix <- matrix(10, length(mymu), 1) plot(nbcanlink(mymu, size = kmatrix) ~ mymu, las = 1, type = "l", col = "blue", xlab = expression({mu})) ## End(Not run) # Estimate the parameters from some simulated data ndata <- data.frame(x2 = runif(nn <- 500)) ndata <- transform(ndata, eta1 = -1 - 1 * x2, # eta1 < 0 size1 = exp(1), size2 = exp(2)) ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inverse = TRUE), mu2 = nbcanlink(eta1, size = size2, inverse = TRUE)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = mu1, size1), y2 = rnbinom(nn, mu = mu2, size2)) summary(ndata) nbcfit <- vglm(cbind(y1, y2) ~ x2, # crit = "c", negbinomial(lmu = "nbcanlink"), data = ndata, trace = TRUE) coef(nbcfit, matrix = TRUE) summary(nbcfit)
nbcanlink("mu", short = FALSE) mymu <- 1:10 # Test some basic operations: kmatrix <- cbind(runif(length(mymu))) eta1 <- nbcanlink(mymu, size = kmatrix) ans2 <- nbcanlink(eta1, size = kmatrix, inverse = TRUE) max(abs(ans2 - mymu)) # Should be 0 ## Not run: mymu <- seq(0.5, 10, length = 101) kmatrix <- matrix(10, length(mymu), 1) plot(nbcanlink(mymu, size = kmatrix) ~ mymu, las = 1, type = "l", col = "blue", xlab = expression({mu})) ## End(Not run) # Estimate the parameters from some simulated data ndata <- data.frame(x2 = runif(nn <- 500)) ndata <- transform(ndata, eta1 = -1 - 1 * x2, # eta1 < 0 size1 = exp(1), size2 = exp(2)) ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inverse = TRUE), mu2 = nbcanlink(eta1, size = size2, inverse = TRUE)) ndata <- transform(ndata, y1 = rnbinom(nn, mu = mu1, size1), y2 = rnbinom(nn, mu = mu2, size2)) summary(ndata) nbcfit <- vglm(cbind(y1, y2) ~ x2, # crit = "c", negbinomial(lmu = "nbcanlink"), data = ndata, trace = TRUE) coef(nbcfit, matrix = TRUE) summary(nbcfit)
Maximum likelihood estimation of the two parameters of a negative binomial distribution.
negbinomial(zero = "size", parallel = FALSE, deviance.arg = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75), vfl = FALSE, mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lmu = "loglink", lsize = "loglink", imethod = 1, imu = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) polya(zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lprob = "logitlink", lsize = "loglink", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL) polyaR(zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lsize = "loglink", lprob = "logitlink", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL)
negbinomial(zero = "size", parallel = FALSE, deviance.arg = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75), vfl = FALSE, mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lmu = "loglink", lsize = "loglink", imethod = 1, imu = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) polya(zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lprob = "logitlink", lsize = "loglink", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL) polyaR(zero = "size", type.fitted = c("mean", "prob"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lsize = "loglink", lprob = "logitlink", imethod = 1, iprob = NULL, iprobs.y = NULL, gprobs.y = ppoints(6), isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imunb = NULL)
zero |
Can be an integer-valued vector, and if so, then
it is usually assigned |
lmu , lsize , lprob
|
Link functions applied to the |
imu , imunb , isize , iprob
|
Optional initial values for the mean and |
nsimEIM |
This argument is used
for computing the diagonal element of the
expected information matrix (EIM) corresponding to |
cutoff.prob |
Fed into the |
max.chunk.MB , max.support
|
|
mds.min |
Numeric.
Minimum value of the NBD mean divided by |
vfl |
Logical.
Fit the
Variance–variance
Factorized
Loglinear
(VFL)
model?
If |
eps.trig |
Numeric.
A small positive value used in the computation of the EIMs.
It focusses on the denominator of the terms of a series.
Each term in the series (that is used to approximate an infinite
series) has a value greater than |
gsize.mux |
Similar to |
type.fitted , percentiles
|
See |
deviance.arg |
Logical.
If |
imethod |
An integer with value |
parallel |
Setting |
gprobs.y |
A vector representing a grid;
passed into the |
iprobs.y |
Passed into the |
The negative binomial distribution (NBD)
can be motivated in several ways,
e.g., as a Poisson distribution with a mean that is gamma
distributed.
There are several common parametrizations of the NBD.
The one used by negbinomial()
uses the
mean mu and an index parameter
k, both of which are positive.
Specifically, the density of a random variable Y
is
P(Y = y) = choose(y + k - 1, y) * (mu / (mu + k))^y * (k / (k + mu))^k
where y = 0, 1, 2, ...,
and mu > 0
and k > 0.
Note that the dispersion parameter is
1/k, so that as
k approaches infinity the
NBD approaches a Poisson distribution.
The response has variance
Var(Y) = mu + mu^2 / k.
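A short simulation sketch checking this variance function (the parameter values are illustrative only):

set.seed(1)
mu <- 4; kay <- 2
y <- rnbinom(1e5, size = kay, mu = mu)
var(y)           # Empirical variance
mu + mu^2 / kay  # Theoretical: mu + mu^2 / k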
When fitted, the
fitted.values
slot of the object
contains the estimated value of the mu parameter,
i.e., of the mean.
It is common for some to use
alpha = 1/k as the
ancillary or heterogeneity parameter;
so common alternatives for
lsize
are
negloglink
and
reciprocallink
.
For polya
the density is
P(Y = y) = choose(y + k - 1, y) * (1 - p)^y * p^k
where y = 0, 1, 2, ...,
and 0 < p < 1
and k > 0.
Family function polyaR()
is the same as polya()
except the order of the two parameters is switched. The reason
is that polyaR()
tries to match with
rnbinom
closely
in terms of the argument order, etc.
Should the probability parameter be of primary interest,
users will probably prefer using polya()
rather than
polyaR()
.
Possibly polyaR()
will be decommissioned one day.
The NBD can be coerced into the
classical GLM framework with one of the parameters being
of interest and the other treated as a nuisance/scale
parameter (this is implemented in the MASS library). The
VGAM family function negbinomial()
treats both
parameters on the same footing, and estimates them both
by full maximum likelihood estimation.
The parameters mu and k
are independent
(diagonal EIM), and the confidence region for k
is extremely skewed so that its standard error is often
of no practical use. The parameter k
has been
used as a measure of aggregation.
For the NB-C the EIM is not diagonal.
These VGAM family functions handle
multiple responses, so that a response matrix can be
inputted. The number of columns is the number
of species, say, and setting zero = -2
means that
all species have a k equalling a (different)
intercept only.
Conlisk, et al. (2007) show that fitting the NBD to presence-absence data will result in identifiability problems. However, the model is identifiable if the response values include 0, 1 and 2.
For the NB canonical link (NB-C), its estimation
has a somewhat interesting history.
Some details are at nbcanlink
.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such
as vglm
,
rrvglm
and vgam
.
Poisson regression corresponds to k equalling
infinity. If the data is Poisson or close to Poisson,
numerical problems may occur.
Some corrective measures are taken, e.g.,
k is effectively capped
(relative to the mean) during
estimation to some large value and a warning is issued.
And setting
stepsize = 0.5
for
half stepping is probably
a good idea too when the data is extreme.
The NBD is a strictly unimodal distribution. Any data set
that does not exhibit a mode (somewhere in the middle) makes
the estimation problem difficult. Set trace = TRUE
to monitor convergence.
These functions are fragile; the maximum likelihood estimate
of the index parameter is fraught (see Lawless, 1987).
Other alternatives to negbinomial
are to fit a NB-1 or
RR-NB (aka NB-P) model; see Yee (2014). Also available are
the NB-C, NB-H and NB-G. Assigning values to the isize
argument may lead to a local solution, and smaller values are
preferred over large values when using this argument.
If one wants to force SFS
to be used on all observations, then
set max.support = 0
or max.chunk.MB = 0
.
If one wants to force the exact method
to be used for all observations, then
set max.support = Inf
.
If the computer has much memory, then trying
max.chunk.MB = Inf
and
max.support = Inf
may provide a small speed increase.
If SFS is used at all, then the working
weights (@weights
) slot of the
fitted object will be a matrix;
otherwise that slot will be a 0 x 0
matrix.
An alternative to the NBD is the generalized Poisson
distribution,
genpoisson1
,
genpoisson2
and
genpoisson0
,
since that also handles overdispersion wrt Poisson.
It has one advantage in that its EIM can be computed
straightforwardly.
Yet to do: write a family function which uses the method
of moments estimator for k.
These 3 functions implement 2 common parameterizations
of the negative binomial (NB). Some people call the
NB with integer k the Pascal distribution,
whereas if
k is real then this is the Polya
distribution. I don't. The one matching the details of
rnbinom
in terms of
p and k
is
polya()
.
For polya()
the code may fail when p is close
to 0 or 1. It is not yet compatible with
cqo
or cao
.
Suppose the response is called ymat
.
For negbinomial()
the diagonal element of the expected information matrix
(EIM) for parameter
k involves an infinite series; consequently SFS
(see
nsimEIM
) is used as the backup algorithm only.
SFS should be better if max(ymat)
is large,
e.g., max(ymat) > 1000
,
or if there are any outliers in ymat
.
The default algorithm involves a finite series approximation
to the support 0:Inf
;
the arguments
max.memory
,
min.size
and
cutoff.prob
are pertinent.
Regardless of the algorithm used,
convergence problems may occur, especially when the response
has large outliers or is large in magnitude.
If convergence failure occurs, try using arguments
(in recommended decreasing order)
max.support
,
nsimEIM
,
cutoff.prob
,
iprobs.y
,
imethod
,
isize
,
zero
,
max.chunk.MB
.
The function negbinomial
can be used by the
fast algorithm in cqo
; however, setting
eq.tolerances = TRUE
and I.tolerances = FALSE
is recommended.
In the first example below (Bliss and Fisher, 1953), from each of 6 McIntosh apple trees in an orchard that had been sprayed, 25 leaves were randomly selected. On each of the leaves, the number of adult female European red mites were counted.
There are two special uses of negbinomial
for handling
count data.
Firstly,
when used by rrvglm
this
results in a continuum of models in between and
inclusive of quasi-Poisson and negative binomial regression.
This is known as a reduced-rank negative binomial model
(RR-NB). It fits a negative binomial log-linear
regression with variance function
Var(Y) = mu + delta1 * mu^delta2
where delta1
and delta2
are parameters to be estimated by MLE.
Confidence intervals are available for
delta2,
therefore it can be decided upon whether the
data are quasi-Poisson or negative binomial, if any.
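A hedged sketch of such an RR-NB fit (using simulated data such as mydata in Example 4 below; Confint.rrnb() is assumed available for the confidence interval):

rrnb <- rrvglm(y3 ~ x2 + x3, negbinomial(zero = NULL),
               data = mydata, trace = TRUE)
Confint.rrnb(rrnb)  # Includes a CI for delta2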
Secondly,
the use of negbinomial
with parallel = TRUE
inside vglm
can result in a model similar to quasipoisson
.
This is named the NB-1 model.
The dispersion parameter is estimated by MLE whereas
glm
uses the method of moments.
In particular, it fits a negative binomial log-linear regression
with variance function
Var(Y) = phi0 * mu
where phi0
is a parameter to be estimated by MLE.
Confidence intervals are available for
phi0.
Thomas W. Yee,
and with a lot of help by Victor Miranda
to get it going with nbcanlink
.
Bliss, C. and Fisher, R. A. (1953). Fitting the negative binomial distribution to biological data. Biometrics 9, 174–200.
Conlisk, E. and Conlisk, J. and Harte, J. (2007). The impossibility of estimating a negative binomial clustering parameter from presence-absence data: A comment on He and Gaston. The American Naturalist 170, 651–654.
Evans, D. A. (1953). Experimental evidence concerning contagious distributions in ecology. Biometrika, 40(1–2), 186–211.
Hilbe, J. M. (2011). Negative Binomial Regression, 2nd Edition. Cambridge: Cambridge University Press.
Lawless, J. F. (1987). Negative binomial and mixed Poisson regression. The Canadian Journal of Statistics 15, 209–225.
Miranda-Soberanis, V. F. and Yee, T. W. (2023). Two-parameter link functions, with applications to negative binomial, Weibull and quantile regression. Computational Statistics, 38, 1463–1485.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
Yee, T. W. (2020). The VGAM package for negative binomial regression. Australian & New Zealand Journal of Statistics, 62, 116–131.
quasipoisson
,
gaitdnbinomial
,
poissonff
,
zinegbinomial
,
negbinomial.size
(e.g., NB-G),
nbcanlink
(NB-C),
posnegbinomial
,
genpoisson1
,
genpoisson2
,
genpoisson0
,
inv.binomial
,
NegBinomial
,
rrvglm
,
cao
,
cqo
,
CommonVGAMffArguments
,
simulate.vlm
,
ppoints
,
margeff
.
## Not run: 
# Example 1: apple tree data (Bliss and Fisher, 1953)
appletree <- data.frame(y = 0:7, w = c(70, 38, 17, 10, 9, 3, 2, 1))
fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree,
            weights = w, crit = "coef")  # Obtain the deviance
fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree,
            weights = w, half.step = FALSE)  # Alternative method
summary(fit)
coef(fit, matrix = TRUE)
Coef(fit)  # For intercept-only models
deviance(fit)  # NB2 only; needs 'crit="coef"' & 'deviance=T' above

# Example 2: simulated data with multiple responses
ndata <- data.frame(x2 = runif(nn <- 200))
ndata <- transform(ndata, y1 = rnbinom(nn, exp(1), mu = exp(3+x2)),
                          y2 = rnbinom(nn, exp(0), mu = exp(2-x2)))
fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, ndata, trace = TRUE)
coef(fit1, matrix = TRUE)

# Example 3: large counts implies SFS is used
ndata <- transform(ndata, y3 = rnbinom(nn, exp(1), mu = exp(10+x2)))
with(ndata, range(y3))  # Large counts
fit2 <- vglm(y3 ~ x2, negbinomial, data = ndata, trace = TRUE)
coef(fit2, matrix = TRUE)
head(weights(fit2, type = "working"))  # Non-empty; SFS was used

# Example 4: a NB-1 to estimate a NB with Var(Y)=phi0*mu
nn <- 200  # Number of observations
phi0 <- 10  # Specify this; should be greater than unity
delta0 <- 1 / (phi0 - 1)
mydata <- data.frame(x2 = runif(nn), x3 = runif(nn))
mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3))
mydata <- transform(mydata, y3 = rnbinom(nn, delta0 * mu, mu = mu))
plot(y3 ~ x2, data = mydata, pch = "+", col = "blue",
     main = paste("Var(Y) = ", phi0, " * mu", sep = ""), las = 1)
nb1 <- vglm(y3 ~ x2 + x3, negbinomial(parallel = TRUE, zero = NULL),
            data = mydata, trace = TRUE)
# Extracting out some quantities:
cnb1 <- coef(nb1, matrix = TRUE)
mydiff <- (cnb1["(Intercept)", "loglink(size)"] -
           cnb1["(Intercept)", "loglink(mu)"])
delta0.hat <- exp(mydiff)
(phi.hat <- 1 + 1 / delta0.hat)  # MLE of phi
summary(nb1)
# Obtain a 95 percent confidence interval for phi0:
myvec <- rbind(-1, 1, 0, 0)
(se.mydiff <- sqrt(t(myvec) %*% vcov(nb1) %*% myvec))
ci.mydiff <- mydiff + c(-1.96, 1.96) * c(se.mydiff)
ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff)
(ci.phi0 <- 1 + 1 / rev(ci.delta0))  # The 95 percent CI for phi0
Confint.nb1(nb1)  # Quick way to get it
# cf. moment estimator:
summary(glm(y3 ~ x2 + x3, quasipoisson, mydata))$disper
## End(Not run)
## Not run: # Example 1: apple tree data (Bliss and Fisher, 1953) appletree <- data.frame(y = 0:7, w = c(70, 38, 17, 10, 9, 3, 2, 1)) fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree, weights = w, crit = "coef") # Obtain the deviance fit <- vglm(y ~ 1, negbinomial(deviance = TRUE), data = appletree, weights = w, half.step = FALSE) # Alternative method summary(fit) coef(fit, matrix = TRUE) Coef(fit) # For intercept-only models deviance(fit) # NB2 only; needs 'crit="coef"' & 'deviance=T' above # Example 2: simulated data with multiple responses ndata <- data.frame(x2 = runif(nn <- 200)) ndata <- transform(ndata, y1 = rnbinom(nn, exp(1), mu = exp(3+x2)), y2 = rnbinom(nn, exp(0), mu = exp(2-x2))) fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, ndata, trace = TRUE) coef(fit1, matrix = TRUE) # Example 3: large counts implies SFS is used ndata <- transform(ndata, y3 = rnbinom(nn, exp(1), mu = exp(10+x2))) with(ndata, range(y3)) # Large counts fit2 <- vglm(y3 ~ x2, negbinomial, data = ndata, trace = TRUE) coef(fit2, matrix = TRUE) head(weights(fit2, type = "working")) # Non-empty; SFS was used # Example 4: a NB-1 to estimate a NB with Var(Y)=phi0*mu nn <- 200 # Number of observations phi0 <- 10 # Specify this; should be greater than unity delta0 <- 1 / (phi0 - 1) mydata <- data.frame(x2 = runif(nn), x3 = runif(nn)) mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3)) mydata <- transform(mydata, y3 = rnbinom(nn, delta0 * mu, mu = mu)) plot(y3 ~ x2, data = mydata, pch = "+", col = "blue", main = paste("Var(Y) = ", phi0, " * mu", sep = ""), las = 1) nb1 <- vglm(y3 ~ x2 + x3, negbinomial(parallel = TRUE, zero = NULL), data = mydata, trace = TRUE) # Extracting out some quantities: cnb1 <- coef(nb1, matrix = TRUE) mydiff <- (cnb1["(Intercept)", "loglink(size)"] - cnb1["(Intercept)", "loglink(mu)"]) delta0.hat <- exp(mydiff) (phi.hat <- 1 + 1 / delta0.hat) # MLE of phi summary(nb1) # Obtain a 95 percent confidence interval for phi0: myvec <- rbind(-1, 1, 0, 0) (se.mydiff <- sqrt(t(myvec) %*% vcov(nb1) %*% myvec)) ci.mydiff <- mydiff + c(-1.96, 1.96) * c(se.mydiff) ci.delta0 <- ci.exp.mydiff <- exp(ci.mydiff) (ci.phi0 <- 1 + 1 / rev(ci.delta0)) # The 95 Confint.nb1(nb1) # Quick way to get it # cf. moment estimator: summary(glm(y3 ~ x2 + x3, quasipoisson, mydata))$disper ## End(Not run)
Maximum likelihood estimation of the mean parameter of a negative binomial distribution with known size parameter.
negbinomial.size(size = Inf, lmu = "loglink", imu = NULL,
                 iprobs.y = 0.35, imethod = 1,
                 ishrinkage = 0.95, zero = NULL)
size |
Numeric, positive.
Same as argument |
lmu , imu
|
Same as |
iprobs.y , imethod
|
Same as |
zero , ishrinkage
|
Same as |
This VGAM family function estimates only the mean parameter of the negative binomial distribution. See negbinomial for general information. Setting size = 1 gives what might be called the NB-G (geometric model; see Hilbe (2011)). The default, size = Inf, corresponds to the Poisson distribution.
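These special cases can be verified numerically against base R's own functions; the following is a minimal sketch (not part of the package's examples), using only dnbinom, dgeom and dpois:

# NB with size = 1 is geometric with prob = 1 / (1 + mu);
# NB with very large size approaches the Poisson.
mu <- 2.5; y <- 0:10
max(abs(dnbinom(y, size = 1, mu = mu) -
        dgeom(y, prob = 1 / (1 + mu))))             # Should be ~0
max(abs(dnbinom(y, size = 1e8, mu = mu) - dpois(y, mu)))  # Should be tiny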
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm, rrvglm and vgam.
If lmu = "nbcanlink" in negbinomial.size() then the size argument here should be assigned, and these values are recycled.
Thomas W. Yee
Hilbe, J. M. (2011). Negative Binomial Regression, 2nd Edition. Cambridge: Cambridge University Press.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
negbinomial, nbcanlink (NB-C model), poissonff, rnbinom, simulate.vlm.
# Simulated data with various multiple responses
size1 <- exp(1); size2 <- exp(2); size3 <- exp(0); size4 <- Inf
ndata <- data.frame(x2 = runif(nn <- 1000))
ndata <- transform(ndata, eta1 = -1 - 2 * x2,  # eta1 must be negative
                   size1 = size1)
ndata <- transform(ndata, mu1 = nbcanlink(eta1, size = size1, inv = TRUE))
ndata <- transform(ndata,
           y1 = rnbinom(nn, mu = mu1, size = size1),           # NB-C
           y2 = rnbinom(nn, mu = exp(2 - x2), size = size2),
           y3 = rnbinom(nn, mu = exp(3 + x2), size = size3),   # NB-G
           y4 = rpois(nn, lambda = exp(1 + x2)))

# Also known as NB-C with size known (Hilbe, 2011)
fit1 <- vglm(y1 ~ x2, negbinomial.size(size = size1, lmu = "nbcanlink"),
             data = ndata, trace = TRUE)
coef(fit1, matrix = TRUE)
head(fit1@misc$size)  # size saved here
fit2 <- vglm(cbind(y2, y3, y4) ~ x2, data = ndata, trace = TRUE,
             negbinomial.size(size = c(size2, size3, size4)))
coef(fit2, matrix = TRUE)
head(fit2@misc$size)  # size saved here
Maximum likelihood estimation of all the coefficients of a LM where each of the usual regression coefficients is modelled with other explanatory variables via parameter link functions. Thus this is a basic varying-coefficient model.
normal.vcm(link.list = list("(Default)" = "identitylink"),
           earg.list = list("(Default)" = list()),
           lsd = "loglink", lvar = "loglink",
           esd = list(), evar = list(),
           var.arg = FALSE, imethod = 1,
           icoefficients = NULL, isd = NULL, zero = "sd",
           sd.inflation.factor = 2.5)
link.list , earg.list
|
Link functions and extra arguments
applied to the coefficients of the LM, excluding
the standard deviation/variance.
See |
lsd , esd , lvar , evar
|
Link function and extra argument
applied to
the standard deviation/variance.
See |
icoefficients |
Optional initial values for the coefficients.
Recycled to length |
var.arg , imethod , isd
|
Same as, or similar to, |
zero |
See |
sd.inflation.factor |
Numeric, should be greater than 1.
The initial value of the standard deviation is multiplied by this,
unless |
This function allows all the usual LM regression coefficients to be modelled as functions of other explanatory variables via parameter link functions. For example, we may want some of them to be positive, or we may want a subset of them to be positive and add to unity. Such a class of models has been named varying-coefficient models (VCMs).
The usual linear model is specified through argument form2. As with all other VGAM family functions, the linear/additive predictors are specified through argument formula.
The multilogitlink link allows a subset of the coefficients to be positive and add to unity. Either none of the coefficients, or else more than one, may be assigned multilogitlink (a single use of it is not allowed). The last such variable is used as the baseline/reference group, and is therefore excluded from the estimation.
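As a minimal sketch of this constraint (the probabilities 0.25, 0.25, 0.50 are illustrative values only), the inverse multilogit link maps the two linear predictors back to three positive values that sum to unity, the last being the baseline:

etas <- cbind(log(0.25 / 0.50), log(0.25 / 0.50))  # log(p_j / p_baseline)
multilogitlink(etas, inverse = TRUE)  # Recovers 0.25, 0.25, 0.50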
By default, the log of the standard deviation is the last linear/additive predictor. It is recommended that this parameter be estimated as intercept-only, for numerical stability.
Technically, the Fisher information matrix is of unit-rank for all but the last parameter (the standard deviation/variance). Hence an approximation is used that pools over all the observations.
This VGAM family function cannot handle multiple responses.
Also, this function will probably not have the
full capabilities of the class of varying-coefficient models as
described by Hastie and Tibshirani (1993). However, it should
be able to manage some simple models, especially involving the
following links:
identitylink, loglink, logofflink, logloglink, logitlink, probitlink, cauchitlink, clogloglink, rhobitlink, fisherzlink.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This VGAM family function is fragile. One should monitor convergence, and possibly enter initial values, especially when there are non-identity-link functions. If the initial value of the standard deviation/variance is too small then numerical problems may occur. One trick is to fit an intercept-only model and feed its predict() output into argument etastart of a more complicated model. The use of the zero argument is recommended in order to keep models as simple as possible.

The standard deviation/variance parameter is best modelled as intercept-only.
Yet to do: allow an argument such as parallel that enables many of the coefficients to be equal. Fix a bug: Coef() does not work for intercept-only models.
T. W. Yee
Hastie, T. and Tibshirani, R. (1993). Varying-coefficient models. J. Roy. Statist. Soc. Ser. B, 55, 757–796.
ndata <- data.frame(x2 = runif(nn <- 2000))
# Note that coeff1 + coeff2 + coeff5 == 1. So try "multilogitlink".
myoffset <- 10
ndata <- transform(ndata,
           coeff1 = 0.25,       # "multilogitlink"
           coeff2 = 0.25,       # "multilogitlink"
           coeff3 = exp(-0.5),  # "loglink"
           # "logofflink" link:
           coeff4 = logofflink(+0.5, offset = myoffset, inverse = TRUE),
           coeff5 = 0.50,       # "multilogitlink"
           coeff6 = 1.00,       # "identitylink"
           v2 = runif(nn), v3 = runif(nn),
           v4 = runif(nn), v5 = rnorm(nn), v6 = rnorm(nn))
ndata <- transform(ndata,
           Coeff1 = 0.25 - 0 * x2,
           Coeff2 = 0.25 - 0 * x2,
           Coeff3 = logitlink(-0.5 - 1 * x2, inverse = TRUE),
           Coeff4 = logloglink( 0.5 - 1 * x2, inverse = TRUE),
           Coeff5 = 0.50 - 0 * x2,
           Coeff6 = 1.00 + 1 * x2)
ndata <- transform(ndata,
           y1 = coeff1 * 1 + coeff2 * v2 + coeff3 * v3 + coeff4 * v4 +
                coeff5 * v5 + coeff6 * v6 + rnorm(nn, sd = exp(0)),
           y2 = Coeff1 * 1 + Coeff2 * v2 + Coeff3 * v3 + Coeff4 * v4 +
                Coeff5 * v5 + Coeff6 * v6 + rnorm(nn, sd = exp(0)))

# An intercept-only model
fit1 <- vglm(y1 ~ 1,
             form2 = ~ 1 + v2 + v3 + v4 + v5 + v6,
             normal.vcm(link.list = list("(Intercept)" = "multilogitlink",
                                         "v2" = "multilogitlink",
                                         "v3" = "loglink",
                                         "v4" = "logofflink",
                                         "(Default)" = "identitylink",
                                         "v5" = "multilogitlink"),
                        earg.list = list("(Intercept)" = list(),
                                         "v2" = list(),
                                         "v4" = list(offset = myoffset),
                                         "v3" = list(),
                                         "(Default)" = list(),
                                         "v5" = list()),
                        zero = c(1:2, 6)),
             data = ndata, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)
# This works only for intercept-only models:
multilogitlink(rbind(coef(fit1, matrix = TRUE)[1, c(1, 2)]), inverse = TRUE)

# A model with covariate x2 for the regression coefficients
fit2 <- vglm(y2 ~ 1 + x2,
             form2 = ~ 1 + v2 + v3 + v4 + v5 + v6,
             normal.vcm(link.list = list("(Intercept)" = "multilogitlink",
                                         "v2" = "multilogitlink",
                                         "v3" = "logitlink",
                                         "v4" = "logloglink",
                                         "(Default)" = "identitylink",
                                         "v5" = "multilogitlink"),
                        earg.list = list("(Intercept)" = list(),
                                         "v2" = list(),
                                         "v3" = list(),
                                         "v4" = list(),
                                         "(Default)" = list(),
                                         "v5" = list()),
                        zero = c(1:2, 6)),
             data = ndata, trace = TRUE)
coef(fit2, matrix = TRUE)
summary(fit2)
Returns the number of parameters in a fitted model object.
nparam(object, ...)
nparam.vlm(object, dpar = TRUE, ...)
nparam.vgam(object, dpar = TRUE, linear.only = FALSE, ...)
nparam.rrvglm(object, dpar = TRUE, ...)
nparam.drrvglm(object, dpar = TRUE, ...)
nparam.qrrvglm(object, dpar = TRUE, ...)
nparam.rrvgam(object, dpar = TRUE, ...)
object |
Some VGAM object, for example, having
class |
... |
Other possible arguments fed into the function. |
dpar |
Logical, include any (estimated) dispersion parameters as a parameter? |
linear.only |
Logical, include only the number of linear (parametric) parameters? |
The code was copied from the AIC() methods functions.

Returns a numeric value with the corresponding number of parameters. For vgam objects, this may be real rather than integer, because the nonlinear degrees of freedom are real-valued.

This code has not been double-checked.
T. W. Yee.
VGLMs are described in vglm-class; VGAMs are described in vgam-class; RR-VGLMs are described in rrvglm-class; AICvlm.
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo))
coef(fit1)
coef(fit1, matrix = TRUE)
nparam(fit1)
(fit2 <- vglm(hits ~ 1, poissonff, weights = ofreq, data = V1))
coef(fit2)
coef(fit2, matrix = TRUE)
nparam(fit2)
nparam(fit2, dpar = FALSE)
Final medal count, by country, for the Summer 2008 and 2012 Olympic Games.
data(olym08)
data(olym12)
A data frame with 87 or 85 observations on the following 6 variables.
rank
a numeric vector, overall ranking of the countries.
country
a factor.
gold
a numeric vector, number of gold medals.
silver
a numeric vector, number of silver medals.
bronze
a numeric vector, number of bronze medals.
totalmedal
a numeric vector, total number of medals.
The events were held during (i) August 8–24, 2008, in Beijing; and (ii) July 27–August 12, 2012, in London.

The official English websites were http://en.beijing2008.cn and http://www.london2012.com.
Help from Viet Hoang Quoc is gratefully acknowledged.
grc
.
summary(olym08)
summary(olym12)
## maybe str(olym08) ; plot(olym08) ...
## Not run: 
par(mfrow = c(1, 2))
myylim <- c(0, 55)
with(head(olym08, n = 8),
     barplot(rbind(gold, silver, bronze),
             col = c("gold", "grey", "brown"),  # No "silver" or "bronze"!
             # "gold", "grey71", "chocolate4",
             names.arg = country, cex.names = 0.5,
             ylim = myylim, beside = TRUE,
             main = "2008 Summer Olympic Final Medal Count",
             ylab = "Medal count", las = 1,
             sub = "Top 8 countries; 'gold'=gold, 'grey'=silver, 'brown'=bronze"))
with(head(olym12, n = 8),
     barplot(rbind(gold, silver, bronze),
             col = c("gold", "grey", "brown"),  # No "silver" or "bronze"!
             names.arg = country, cex.names = 0.5,
             ylim = myylim, beside = TRUE,
             main = "2012 Summer Olympic Final Medal Count",
             ylab = "Medal count", las = 1,
             sub = "Top 8 countries; 'gold'=gold, 'grey'=silver, 'brown'=bronze"))
## End(Not run)
Generic function for the optimums (or optima) of a model.
Opt(object, ...)
object |
An object for which the computation or extraction of an optimum (or optimums) is meaningful. |
... |
Other arguments fed into the specific
methods function of the model. Sometimes they are fed
into the methods function for |
Different models can define an optimum in different ways. Many models have no such notion or definition.
Optimums occur in quadratic and additive ordination, e.g., CQO or CAO. For these models the optimum is the value of the latent variable where the maximum occurs, i.e., where the fitted value achieves its highest value. For quadratic ordination models there is a formula for the optimum but for additive ordination models the optimum must be searched for numerically. If it occurs on the boundary, then the optimum is undefined. At an optimum, the fitted value of the response is called the maximum.
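For rank-1 quadratic ordination the formula is elementary: if a species has linear predictor \( \eta(v) = a + b v + c v^2 \) with \( c < 0 \), the optimum is \( -b/(2c) \). A minimal sketch with made-up coefficients (a log link is assumed):

a <- 1.2; b <- 0.8; cc <- -0.5  # cc < 0 gives a bell-shaped curve
opt <- -b / (2 * cc)            # The optimum
maximum <- exp(a + b * opt + cc * opt^2)  # Fitted value at the optimum
c(optimum = opt, maximum = maximum)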
The value returned depends specifically on the methods function invoked.
In ordination, the optimum of a species is sometimes called the species score.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
## Not run: 
set.seed(111)  # This leads to the global solution
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          family = poissonff, data = hspider, Crow1positive = FALSE)
Opt(p1)

clr <- (1:(ncol(depvar(p1)) + 1))[-7]  # Omits yellow
persp(p1, col = clr, las = 1, main = "Vertical lines at the optimums")
abline(v = Opt(p1), lty = 2, col = clr)
## End(Not run)
Fits a Poisson regression where the response is ordinal (the Poisson counts are grouped between known cutpoints).
ordpoisson(cutpoints, countdata = FALSE, NOS = NULL,
           Levels = NULL, init.mu = NULL, parallel = FALSE,
           zero = NULL, link = "loglink")
cutpoints |
Numeric. The cutpoints, |
countdata |
Logical. Is the response (LHS of formula) in count-data format?
If not then the response is a matrix or vector with values |
NOS |
Integer. The number of species, or more generally, the number of
response random variates.
This argument must be specified when |
Levels |
Integer vector, recycled to length |
init.mu |
Numeric. Initial values for the means of the Poisson regressions.
Recycled to length |
parallel , zero , link
|
See |
This VGAM family function uses maximum likelihood estimation
(Fisher scoring)
to fit a Poisson regression to each column of a matrix response.
The data, however, is ordinal, and is obtained from known integer
cutpoints.
Here, \( \ell = 1, \ldots, L \) where \( L \) (\( \geq 2 \)) is the number of levels. In more detail, let \( Y^* = \ell \) if \( C_{\ell - 1} \leq Y < C_{\ell} \), where the \( C_{\ell} \) are the cutpoints. We have \( C_0 = -\infty \) and \( C_L = \infty \). The response for this family function corresponds to \( Y^* \), but we are really interested in the Poisson regression of \( Y \).
If NOS = 1 then the argument cutpoints is a vector where the last value (Inf) is optional. If NOS > 1 then the vector should have NOS-1 Inf values separating the cutpoints. For example, if there are NOS = 3 responses, then something like ordpoisson(cut = c(0, 5, 10, Inf, 20, 30, Inf, 0, 10, 40, Inf)) is valid.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The input requires care as little to no checking is done. If fit is the fitted object, have a look at fit@extra and depvar(fit) to check.

Sometimes there are no observations between two cutpoints. If so, the arguments Levels and NOS need to be specified too. See below for an example.
Thomas W. Yee
Yee, T. W. (2020). Ordinal ordination with normalizing link functions for count data, (in preparation).
set.seed(123)  # Example 1
x2 <- runif(n <- 1000); x3 <- runif(n)
mymu <- exp(3 - 1 * x2 + 2 * x3)
y1 <- rpois(n, lambda = mymu)
cutpts <- c(-Inf, 20, 30, Inf)
fcutpts <- cutpts[is.finite(cutpts)]  # finite cutpoints
ystar <- cut(y1, breaks = cutpts, labels = FALSE)
## Not run: 
plot(x2, x3, col = ystar, pch = as.character(ystar))
## End(Not run)
table(ystar) / sum(table(ystar))
fit <- vglm(ystar ~ x2 + x3, fam = ordpoisson(cutpoi = fcutpts))
head(depvar(fit))  # This can be input if countdata = TRUE
head(fitted(fit))
head(predict(fit))
coef(fit, matrix = TRUE)
fit@extra

# Example 2: multivariate and there are no obsns between some cutpoints
cutpts2 <- c(-Inf, 0, 9, 10, 20, 70, 200, 201, Inf)
fcutpts2 <- cutpts2[is.finite(cutpts2)]  # finite cutpoints
y2 <- rpois(n, lambda = mymu)  # Same model as y1
ystar2 <- cut(y2, breaks = cutpts2, labels = FALSE)
table(ystar2) / sum(table(ystar2))
fit <- vglm(cbind(ystar, ystar2) ~ x2 + x3, fam =
            ordpoisson(cutpoi = c(fcutpts, Inf, fcutpts2, Inf),
                       Levels = c(length(fcutpts) + 1, length(fcutpts2) + 1),
                       parallel = TRUE), trace = TRUE)
coef(fit, matrix = TRUE)
fit@extra
constraints(fit)
summary(depvar(fit))  # Some columns have all zeros
Ordinal superiority measures for the linear model and cumulative link models: the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model.
ordsup(object, ...)
ordsup.vglm(object, all.vars = FALSE, confint = FALSE, ...)
object |
A |
all.vars |
Logical. The default is to use explanatory variables
which are binary, but all variables are used (except the intercept)
if set to |
confint |
Logical.
If |
... |
Parameters that can be fed into |
Details are given in Agresti and Kateri (2017), from which this help file draws directly. This function returns two quantities for comparing two groups on an ordinal categorical response variable, while adjusting for other explanatory variables. They are called “ordinal superiority” measures, and they allow the two groups to be compared without supplementary explanatory variables.

Let \( Y_1 \) and \( Y_2 \) be independent random variables from groups A and B, say, for a quantitative ordinal categorical scale. Then
\[ \gamma = P(Y_1 > Y_2) + \tfrac{1}{2} P(Y_1 = Y_2) \]
summarizes their relative size. A second quantity is
\[ \Delta = P(Y_1 > Y_2) - P(Y_2 > Y_1) . \]
Then \( \gamma = (\Delta + 1)/2 \), whereas \( \Delta = 2\gamma - 1 \). The range of \( \gamma \) is \( [0, 1] \), while the range of \( \Delta \) is \( [-1, 1] \).
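A minimal sketch (with simulated data, not taken from the paper) of the empirical versions of these measures, and of the identity linking them:

set.seed(1)
y1 <- rpois(500, 4); y2 <- rpois(500, 3)  # Hypothetical groups A and B
cmp <- outer(y1, y2, "-")
(gamma.hat <- mean(cmp > 0) + 0.5 * mean(cmp == 0))
(Delta.hat <- mean(cmp > 0) - mean(cmp < 0))
gamma.hat - (Delta.hat + 1) / 2  # Should be exactly 0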
The examples below are based on that paper.
This function is currently implemented for a very limited
number of specific models.
By default,
a list with components
gamma
and
Delta
,
where each is a vector with elements corresponding to
binary explanatory variables (i.e., 0 or 1),
and if no explanatory variables are binary then a
NULL
is returned.
If confint = TRUE
then the list contains 4 more components:
lower.gamma
,
upper.gamma
,
Lower.Delta
,
Upper.Delta
.
Thomas W. Yee
Agresti, A. and Kateri, M. (2017). Ordinal probability effect measures for group comparisons in multinomial cumulative link models. Biometrics, 73, 214–219.
cumulative, propodds, uninormal.
## Not run: 
Mental <- read.table("http://www.stat.ufl.edu/~aa/glm/data/Mental.dat",
                     header = TRUE)  # May take a while to download
Mental$impair <- ordered(Mental$impair)
pfit3 <- vglm(impair ~ ses + life, data = Mental,
              cumulative(link = "probitlink", reverse = FALSE,
                         parallel = TRUE))
coef(pfit3, matrix = TRUE)
ordsup(pfit3)  # The 'ses' variable is binary

# Fit a crude LM
fit7 <- vglm(as.numeric(impair) ~ ses + life, uninormal, data = Mental)
coef(fit7, matrix = TRUE)  # 'sd' is estimated by MLE
ordsup(fit7)
ordsup(fit7, all.vars = TRUE)  # Some output may not be meaningful
ordsup(fit7, confint = TRUE, method = "profile")
## End(Not run)
Annual maximum temperatures collected at Oxford, UK.
data(oxtemp)
A data frame with 80 observations on the following 2 variables.
maxtemp
Annual maximum temperatures (in degrees Fahrenheit).
year
The values 1901 to 1980.
The data were collected from 1901 to 1980.
Unknown.
## Not run: fit <- vglm(maxtemp ~ 1, gevff, data = oxtemp, trace = TRUE)
Maximum likelihood estimation of the 2-parameter paralogistic distribution.
paralogistic(lscale = "loglink", lshape1.a = "loglink", iscale = NULL,
             ishape1.a = NULL, imethod = 1, lss = TRUE,
             gscale = exp(-5:5), gshape1.a = seq(0.75, 4, by = 0.25),
             probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lss |
See |
lshape1.a , lscale
|
Parameter link functions applied to the
(positive) parameters |
iscale , ishape1.a , imethod , zero
|
See |
gscale , gshape1.a
|
See |
probs.y |
See |
The 2-parameter paralogistic distribution is the 4-parameter generalized beta II distribution with shape parameters \( q = a \) and \( p = 1 \). It is the 3-parameter Singh-Maddala distribution with \( q = a \). More details can be found in Kleiber and Kotz (2003).

The 2-parameter paralogistic has density
\[ f(y) = \frac{a^2 y^{a-1}}{b^a \left[ 1 + (y/b)^a \right]^{1+a}} \]
for \( a > 0 \), \( b > 0 \), \( y \geq 0 \). Here, \( b \) is the scale parameter scale, and \( a \) is the shape parameter. The mean is
\[ E(Y) = b \, \Gamma(1 + 1/a) \, \Gamma(a - 1/a) / \Gamma(a) \]
provided \( a > 1 \); these are returned as the fitted values.
This family function handles multiple responses.
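The mean formula can be checked numerically against dparalogistic; a minimal sketch with arbitrary parameter values satisfying \( a > 1 \):

a <- 2.5; b <- 3  # shape1.a and scale, chosen arbitrarily
numer <- integrate(function(y)
  y * dparalogistic(y, scale = b, shape1.a = a), 0, Inf)$value
exact <- b * gamma(1 + 1/a) * gamma(a - 1/a) / gamma(a)
c(numerical = numer, formula = exact)  # Should agree closely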
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
See the notes in genbetaII
.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Paralogistic, sinmad, genbetaII, betaII, dagum, fisk, inv.lomax, lomax, inv.paralogistic.
## Not run: 
pdata <- data.frame(y = rparalogistic(n = 3000, exp(1), scale = exp(1)))
fit <- vglm(y ~ 1, paralogistic(lss = FALSE), data = pdata, trace = TRUE)
fit <- vglm(y ~ 1, paralogistic(ishape1.a = 2.3, iscale = 5),
            data = pdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Density, distribution function, quantile function and random generation for the paralogistic distribution with shape parameter a and scale parameter scale.
dparalogistic(x, scale = 1, shape1.a, log = FALSE)
pparalogistic(q, scale = 1, shape1.a,
              lower.tail = TRUE, log.p = FALSE)
qparalogistic(p, scale = 1, shape1.a,
              lower.tail = TRUE, log.p = FALSE)
rparalogistic(n, scale = 1, shape1.a)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations. If |
shape1.a |
shape parameter. |
scale |
scale parameter. |
log |
Logical.
If |
lower.tail , log.p
|
See paralogistic, which is the VGAM family function for estimating the parameters by maximum likelihood estimation.

dparalogistic gives the density, pparalogistic gives the distribution function, qparalogistic gives the quantile function, and rparalogistic generates random deviates.

The paralogistic distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee and Kai Huang
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
pdata <- data.frame(y = rparalogistic(n = 3000, scale = exp(1), exp(2)))
fit <- vglm(y ~ 1, paralogistic(lss = FALSE, ishape1.a = 4.1),
            data = pdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
Density, distribution function, quantile function and random generation for the Pareto(I) distribution with parameters scale and shape.
dpareto(x, scale = 1, shape, log = FALSE)
ppareto(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qpareto(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
rpareto(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
scale , shape
|
the |
log |
Logical.
If |
lower.tail , log.p
|
See paretoff, the VGAM family function for estimating the shape parameter \( k \) by maximum likelihood estimation, for the formula of the probability density function and the range restrictions imposed on the parameters.

dpareto gives the density, ppareto gives the distribution function, qpareto gives the quantile function, and rpareto generates random deviates.
T. W. Yee and Kai Huang
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
alpha <- 3; k <- exp(1); x <- seq(2.8, 8, len = 300)
## Not run: 
plot(x, dpareto(x, scale = alpha, shape = k), type = "l",
     main = "Pareto density split into 10 equal areas")
abline(h = 0, col = "blue", lty = 2)
qvec <- qpareto(seq(0.1, 0.9, by = 0.1), scale = alpha, shape = k)
lines(qvec, dpareto(qvec, scale = alpha, shape = k),
      col = "purple", lty = 3, type = "h")
## End(Not run)
pvec <- seq(0.1, 0.9, by = 0.1)
qvec <- qpareto(pvec, scale = alpha, shape = k)
ppareto(qvec, scale = alpha, shape = k)
qpareto(ppareto(qvec, scale = alpha, shape = k),
        scale = alpha, shape = k) - qvec  # Should be 0
Estimates one of the parameters of the Pareto(I) distribution by maximum likelihood estimation. Also includes the upper truncated Pareto(I) distribution.
paretoff(scale = NULL, lshape = "loglink")
truncpareto(lower, upper, lshape = "loglink", ishape = NULL, imethod = 1)
lshape |
Parameter link function applied to the parameter |
scale |
Numeric.
The parameter |
lower , upper
|
Numeric.
Lower and upper limits for the truncated Pareto distribution.
Each must be positive and of length 1.
They are called |
ishape |
Numeric.
Optional initial value for the shape parameter.
A |
imethod |
See |
A random variable \( Y \) has a Pareto distribution if
\[ P[Y > y] = C / y^k \]
for some positive \( k \) and \( C \). This model is important in many applications due to the power law probability tail, especially for large values of \( y \).

The Pareto distribution, which is widely used in economics, has a probability density function that can be written
\[ f(y; \alpha, k) = k \, \alpha^k / y^{k+1} \]
for \( 0 < \alpha < y \) and \( k > 0 \). The \( \alpha \) is called the scale parameter, and it is either assumed known or else min(y) is used. The parameter \( k \) is called the shape parameter. The mean of \( Y \) is \( \alpha k / (k - 1) \) provided \( k > 1 \). Its variance is \( \alpha^2 k / ((k-1)^2 (k-2)) \) provided \( k > 2 \).

The upper truncated Pareto distribution has a probability density function that can be written
\[ f(y; \alpha, k, U) = \frac{k \, \alpha^k / y^{k+1}}{1 - (\alpha/U)^k} \]
for \( 0 < \alpha \leq y \leq U < \infty \) and \( k > 0 \). Possibly, better names for \( k \) are the index and tail parameters. Here, \( \alpha \) and \( U \) are known. The mean of \( Y \) is
\[ E(Y) = \frac{k \, \alpha^k \, (U^{1-k} - \alpha^{1-k})}{(1-k)\,(1 - (\alpha/U)^k)} . \]
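A quick simulation sketch (arbitrary parameter values) checks the truncated mean formula against rtruncpareto:

set.seed(1)
alpha <- 2; U <- 8; k <- 1.5
y <- rtruncpareto(1e5, lower = alpha, upper = U, shape = k)
exact <- k * alpha^k * (U^(1 - k) - alpha^(1 - k)) /
         ((1 - k) * (1 - (alpha / U)^k))
c(simulated = mean(y), formula = exact)  # Should agree closely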
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The usual or unbounded Pareto distribution has two parameters (called \( \alpha \) and \( k \) here) but the family function paretoff estimates only \( k \) using iteratively reweighted least squares. The MLE of the \( \alpha \) parameter lies on the boundary and is min(y) where y is the response. Consequently, using the default argument values, the standard errors are incorrect when one does a summary on the fitted object. If the user inputs a value for \( \alpha \) (via the scale argument) then it is assumed known with this value, and then summary on the fitted object should be correct. Numerical problems may occur for small \( k \), e.g., \( k < 1 \).
Outside of economics, the Pareto distribution is known as the Bradford distribution.
For paretoff, if the estimate of \( k \) is less than or equal to unity then the fitted values will be NAs. Also, paretoff fits the Pareto(I) distribution. See paretoIV for the more general Pareto(IV/III/II) distributions, but there is a slight change in notation: \( s = k \) and \( b = \alpha \).

In some applications the Pareto law is truncated by a natural upper bound on the probability tail. The upper truncated Pareto distribution has three parameters (called \( \alpha \), \( U \) and \( k \) here) but the family function truncpareto() estimates only \( k \). With known lower and upper limits, the ML estimator of \( k \) has the usual properties of MLEs. Aban et al. (2006) discuss other inferential details.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
Aban, I. B., Meerschaert, M. M. and Panorska, A. K. (2006). Parameter estimation for the truncated Pareto distribution, Journal of the American Statistical Association, 101(473), 270–277.
Pareto, Truncpareto, paretoIV, gpd, benini1.
alpha <- 2; kay <- exp(3)
pdata <- data.frame(y = rpareto(n = 1000, scale = alpha, shape = kay))
fit <- vglm(y ~ 1, paretoff, data = pdata, trace = TRUE)
fit@extra  # The estimate of alpha is here
head(fitted(fit))
with(pdata, mean(y))
coef(fit, matrix = TRUE)
summary(fit)  # Standard errors are incorrect!!

# Here, alpha is assumed known
fit2 <- vglm(y ~ 1, paretoff(scale = alpha), data = pdata, trace = TRUE)
fit2@extra  # alpha stored here
head(fitted(fit2))
coef(fit2, matrix = TRUE)
summary(fit2)  # Standard errors are okay

# Upper truncated Pareto distribution
lower <- 2; upper <- 8; kay <- exp(2)
pdata3 <- data.frame(y = rtruncpareto(n = 100, lower = lower,
                                      upper = upper, shape = kay))
fit3 <- vglm(y ~ 1, truncpareto(lower, upper), data = pdata3, trace = TRUE)
coef(fit3, matrix = TRUE)
c(fit3@misc$lower, fit3@misc$upper)
Estimates three of the parameters of the Pareto(IV) distribution by maximum likelihood estimation. Some special cases of this distribution are also handled.
paretoIV(location = 0, lscale = "loglink", linequality = "loglink",
         lshape = "loglink", iscale = 1, iinequality = 1,
         ishape = NULL, imethod = 1)
paretoIII(location = 0, lscale = "loglink", linequality = "loglink",
          iscale = NULL, iinequality = NULL)
paretoII(location = 0, lscale = "loglink", lshape = "loglink",
         iscale = NULL, ishape = NULL)
location |
Location parameter, called |
lscale , linequality , lshape
|
Parameter link functions for the
scale parameter (called |
iscale , iinequality , ishape
|
Initial values for the parameters.
A |
imethod |
Method of initialization for the shape parameter. Currently only values 1 and 2 are available. Try the other value if convergence failure occurs. |
The Pareto(IV) distribution, which is used in actuarial science, economics, finance and telecommunications, has a cumulative distribution function that can be written
\[ F(y) = 1 - \left[ 1 + \left( \frac{y - a}{b} \right)^{1/g} \right]^{-s} \]
for \( y > a \), \( b > 0 \), \( g > 0 \) and \( s > 0 \). The \( a \) is called the location parameter, \( b \) the scale parameter, \( g \) the inequality parameter, and \( s \) the shape parameter.

The location parameter is assumed known, otherwise the Pareto(IV) distribution will not be a regular family. This assumption is not too restrictive in modelling because in typical applications this parameter is known, e.g., in insurance and reinsurance it is pre-defined by a contract and can be represented as a deductible or a retention level.

The inequality parameter is so-called because of its interpretation in the economics context. If we choose a unit shape parameter value and a zero location parameter value then the inequality parameter is the Gini index of inequality, provided \( 0 < g \leq 1 \).
The fitted values are currently the median, e.g., qparetoIV is used for paretoIV().
There are a number of special cases of the Pareto(IV) distribution. These include the Pareto(I), Pareto(II), Pareto(III), and Burr family of distributions. Denoting \( W(a, b, g, s) \) as the Pareto(IV) distribution, the Burr distribution \( \mathrm{Burr}(b, g, s) \) is \( W(0, b, 1/g, s) \), the Pareto(III) distribution \( \mathrm{Pareto(III)}(a, b, g) \) is \( W(a, b, g, 1) \), the Pareto(II) distribution \( \mathrm{Pareto(II)}(a, b, s) \) is \( W(a, b, 1, s) \), and the Pareto(I) distribution \( \mathrm{Pareto(I)}(b, s) \) is \( W(b, b, 1, s) \). Thus the Burr distribution can be fitted using the negloglink link function and using the default location = 0 argument. The Pareto(I) distribution can be fitted using paretoff but there is a slight change in notation: \( s = k \) and \( b = \alpha \).
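These special-case mappings can be checked numerically with the [dpqr]paretoIV functions documented further below; a sketch with arbitrary argument values:

p <- c(0.25, 0.5, 0.9)
qparetoIV(p, location = 1, scale = 2, inequality = 1, shape = 3) -
  qparetoII(p, location = 1, scale = 2, shape = 3)          # Should be 0
qparetoIV(p, location = 0, scale = 2, inequality = 0.5, shape = 1) -
  qparetoIII(p, location = 0, scale = 2, inequality = 0.5)  # Should be 0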
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The Pareto(IV) distribution is very general; for example, special cases include the Pareto(I), Pareto(II), Pareto(III), and Burr family of distributions. [Johnson et al. (1994) state on p. 19 that fitting the Type IV distribution by ML is very difficult and rarely attempted.] Consequently, reasonably good initial values are recommended, and convergence to a local solution may occur. For this reason setting trace = TRUE is a good idea for monitoring the convergence. Large samples are ideally required to get reasonable results.
The extra slot of the fitted object has a component called "location" which stores the location parameter value(s).
T. W. Yee
Johnson N. L., Kotz S., and Balakrishnan N. (1994). Continuous Univariate Distributions, Volume 1, 2nd ed. New York: Wiley.
Brazauskas, V. (2003). Information matrix for Pareto(IV), Burr, and related distributions. Comm. Statist. Theory and Methods 32, 315–325.
Arnold, B. C. (1983). Pareto Distributions. Fairland, Maryland: International Cooperative Publishing House.
pdata <- data.frame(y = rparetoIV(2000, scale = exp(1),
                                  ineq = exp(-0.3), shape = exp(1)))
## Not run: 
par(mfrow = c(2, 1))
with(pdata, hist(y)); with(pdata, hist(log(y)))
## End(Not run)
fit <- vglm(y ~ 1, paretoIV, data = pdata, trace = TRUE)
head(fitted(fit))
summary(pdata)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Density, distribution function, quantile function and random generation for the Pareto(IV/III/II) distributions.
dparetoIV(x, location = 0, scale = 1, inequality = 1, shape = 1,
          log = FALSE)
pparetoIV(q, location = 0, scale = 1, inequality = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
qparetoIV(p, location = 0, scale = 1, inequality = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
rparetoIV(n, location = 0, scale = 1, inequality = 1, shape = 1)
dparetoIII(x, location = 0, scale = 1, inequality = 1, log = FALSE)
pparetoIII(q, location = 0, scale = 1, inequality = 1,
           lower.tail = TRUE, log.p = FALSE)
qparetoIII(p, location = 0, scale = 1, inequality = 1,
           lower.tail = TRUE, log.p = FALSE)
rparetoIII(n, location = 0, scale = 1, inequality = 1)
dparetoII(x, location = 0, scale = 1, shape = 1, log = FALSE)
pparetoII(q, location = 0, scale = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
qparetoII(p, location = 0, scale = 1, shape = 1,
          lower.tail = TRUE, log.p = FALSE)
rparetoII(n, location = 0, scale = 1, shape = 1)
dparetoI(x, scale = 1, shape = 1, log = FALSE)
pparetoI(q, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE)
qparetoI(p, scale = 1, shape = 1, lower.tail = TRUE, log.p = FALSE)
rparetoI(n, scale = 1, shape = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
location |
the location parameter. |
scale , shape , inequality
|
the (positive) scale, inequality and shape parameters. |
log |
Logical.
If |
lower.tail , log.p
|
For the formulas and other details see paretoIV.

Functions beginning with the letter d give the density, p give the distribution function, q give the quantile function, and r generate random deviates.

The functions [dpqr]paretoI are the same as [dpqr]pareto except for a slight change in notation: \( s = k \) and \( b = \alpha \); see Pareto.
T. W. Yee and Kai Huang
Brazauskas, V. (2003). Information matrix for Pareto(IV), Burr, and related distributions. Comm. Statist. Theory and Methods 32, 315–325.
Arnold, B. C. (1983). Pareto Distributions. Fairland, Maryland: International Cooperative Publishing House.
## Not run: 
x <- seq(-0.2, 4, by = 0.01)
loc <- 0; Scale <- 1; ineq <- 1; shape <- 1.0
plot(x, dparetoIV(x, loc, Scale, ineq, shape), type = "l",
     main = "Blue is density, orange is the CDF", col = "blue",
     sub = "Purple are 5,10,...,95 percentiles", ylim = 0:1,
     las = 1, ylab = "")
abline(h = 0, col = "blue", lty = 2)
Q <- qparetoIV(seq(0.05, 0.95, by = 0.05), loc, Scale, ineq, shape)
lines(Q, dparetoIV(Q, loc, Scale, ineq, shape),
      col = "purple", lty = 3, type = "h")
lines(x, pparetoIV(x, loc, Scale, ineq, shape), col = "orange")
abline(h = 0, lty = 2)
## End(Not run)
Maximum likelihood estimation of the 2-parameter Perks distribution.
perks(lscale = "loglink", lshape = "loglink",
      iscale = NULL, ishape = NULL,
      gscale = exp(-5:5), gshape = exp(-5:5),
      nsimEIM = 500, oim.mean = FALSE, zero = NULL,
      nowarning = FALSE)
nowarning |
Logical. Suppress a warning? Ignored for VGAM 0.9-7 and higher. |
lscale , lshape
|
Parameter link functions applied to the
shape parameter |
iscale , ishape
|
Optional initial values.
A |
gscale , gshape
|
|
nsimEIM , zero
|
|
oim.mean |
To be currently ignored. |
The Perks distribution has cumulative distribution function
\[ F(y; \alpha, \beta) = 1 - \left[ \frac{1 + \alpha}{1 + \alpha e^{\beta y}} \right]^{1/\beta} \]
which leads to a probability density function
\[ f(y; \alpha, \beta) = \frac{\alpha \, (1 + \alpha)^{1/\beta} \, e^{\beta y}}{(1 + \alpha e^{\beta y})^{1 + 1/\beta}} \]
for \( \alpha > 0 \), \( \beta > 0 \), \( y > 0 \). Here, \( \beta \) is called the scale parameter scale, and \( \alpha \) is called a shape parameter. The moments for this distribution do not appear to be available in closed form.
Simulated Fisher scoring is used and multiple responses are handled.
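Although the moments lack a closed form, they can be computed numerically from dperks; a minimal sketch of the mean with arbitrary parameter values:

Shape <- exp(-1); Scale <- exp(1)
integrate(function(y) y * dperks(y, scale = Scale, shape = Shape),
          lower = 0, upper = Inf)$value  # Numerical mean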
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
A lot of care is needed because this is a rather difficult distribution for parameter estimation. If the self-starting initial values fail then try experimenting with the initial value arguments, especially iscale. Successful convergence depends on having very good initial values. Also, monitor convergence by setting trace = TRUE.
T. W. Yee
Perks, W. (1932). On some experiments in the graduation of mortality statistics. Journal of the Institute of Actuaries, 63, 12–40.
Richards, S. J. (2012). A handbook of parametric survival models for actuarial use. Scandinavian Actuarial Journal, 1–25.
## Not run: 
set.seed(123)
pdata <- data.frame(x2 = runif(nn <- 1000))  # x2 unused
pdata <- transform(pdata, eta1 = -1, ceta1 = 1)
pdata <- transform(pdata, shape1 = exp(eta1), scale1 = exp(ceta1))
pdata <- transform(pdata, y1 = rperks(nn, sh = shape1, sc = scale1))
fit1 <- vglm(y1 ~ 1, perks, data = pdata, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)
## End(Not run)
Density, cumulative distribution function, quantile function and random generation for the Perks distribution.
dperks(x, scale = 1, shape, log = FALSE)
pperks(q, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
qperks(p, scale = 1, shape, lower.tail = TRUE, log.p = FALSE)
rperks(n, scale = 1, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
log |
Logical.
If |
lower.tail , log.p
|
|
shape , scale
|
positive shape and scale parameters. |
See perks for details.

dperks gives the density, pperks gives the cumulative distribution function, qperks gives the quantile function, and rperks generates random deviates.
T. W. Yee and Kai Huang
probs <- seq(0.01, 0.99, by = 0.01)
Shape <- exp(-1.0); Scale <- exp(1)
max(abs(pperks(qperks(p = probs, Shape, Scale),
               Shape, Scale) - probs))  # Should be 0
## Not run: 
x <- seq(-0.1, 7, by = 0.01)
plot(x, dperks(x, Shape, Scale), type = "l", col = "blue", las = 1,
     main = "Blue is density, orange is cumulative distribution function",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     ylab = "", ylim = 0:1)
abline(h = 0, col = "blue", lty = 2)
lines(x, pperks(x, Shape, Scale), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qperks(probs, Shape, Scale)
lines(Q, dperks(Q, Shape, Scale), col = "purple", lty = 3, type = "h")
pperks(Q, Shape, Scale) - probs  # Should be all zero
abline(h = probs, col = "purple", lty = 3)
## End(Not run)
Produces a perspective plot for a CQO model (QRR-VGLM). It is only applicable for rank-1 or rank-2 models with argument noRRR = ~ 1.
perspqrrvglm(x, varI.latvar = FALSE, refResponse = NULL,
             show.plot = TRUE, xlim = NULL, ylim = NULL, zlim = NULL,
             gridlength = if (Rank == 1) 301 else c(51, 51),
             which.species = NULL,
             xlab = if (Rank == 1) "Latent Variable"
                    else "Latent Variable 1",
             ylab = if (Rank == 1) "Expected Value"
                    else "Latent Variable 2",
             zlab = "Expected value", labelSpecies = FALSE,
             stretch = 1.05, main = "", ticktype = "detailed",
             col = if (Rank == 1) par()$col else "white",
             llty = par()$lty, llwd = par()$lwd, add1 = FALSE, ...)
x |
Object of class |
varI.latvar |
Logical that is fed into |
refResponse |
Integer or character that is fed into |
show.plot |
Logical. Plot it? |
xlim , ylim
|
Limits of the x- and y-axis. Both are numeric of length 2.
See |
zlim |
Limits of the z-axis. Numeric of length 2.
Ignored if rank is 1.
See |
gridlength |
Numeric. The fitted values are evaluated on a grid, and this
argument regulates the fineness of the grid. If |
which.species |
Numeric or character vector. Indicates which species are to be
plotted. The default is to plot all of them. If numeric, it should
contain values in the set {1,2,..., |
xlab , ylab
|
Character caption for the x-axis and y-axis. By default, a suitable caption is
found. See the |
zlab |
Character caption for the z-axis.
Used only if |
labelSpecies |
Logical.
Whether the species should be labelled with their names.
Used for |
stretch |
Numeric. A value slightly more than 1, this argument
adjusts the height of the y-axis. Used for |
main |
Character, giving the title of the plot.
See the |
ticktype |
Tick type. Used only if |
col |
Color.
See |
llty |
Line type.
Rank-1 models only.
See the |
llwd |
Line width.
Rank-1 models only.
See the |
add1 |
Logical. Add to an existing plot? Used only for rank-1 models. |
... |
Arguments passed into |
For a rank-1 model, a perspective plot is similar to
lvplot.qrrvglm
but plots the curves along a fine grid
and there is no rugplot to show the site scores.
For a rank-2 model, a perspective plot has the first latent variable as
the x-axis, the second latent variable as the y-axis, and the expected
value (fitted value) as the z-axis. The result of a CQO is that each
species has a response surface with elliptical contours. This function
will, at each grid point, work out the maximum fitted value over all
the species. The resulting response surface is plotted. Thus rare
species will be obscured and abundant species will dominate the plot.
To view rare species, use the which.species
argument to select
a subset of the species.
A perspective plot will be performed if noRRR = ~ 1 and Rank = 1 or 2. Also, all the tolerance matrices of those species to be plotted must be positive-definite.
For a rank-2 model, a list with the following components.
fitted |
A |
latvar1grid , latvar2grid
|
The grid points for the x-axis and y-axis. |
max.fitted |
A |
For a rank-1 model, the components latvar2grid and max.fitted are NULL.
Yee (2004) does not refer to perspective plots. Instead, contour plots
via lvplot.qrrvglm
are used.
For rank-1 models, a similar function to this one is lvplot.qrrvglm. It plots the fitted values at the actual site scores rather than on a fine grid. That has the advantage that the user sees the curves as a direct result of the model fitted to the data, whereas here it is easy to mistake the smooth bell-shaped curves for the truth because the data are further removed.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
persp
,
cqo
,
Coef.qrrvglm
,
lvplot.qrrvglm
,
par
,
title
.
## Not run: 
hspider[, 1:6] <- scale(hspider[, 1:6])  # Good idea when I.tolerances = TRUE
set.seed(111)
r1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardmont, Pardnigr, Pardpull, Trocterr) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          poissonff, data = hspider, trace = FALSE, I.tolerances = TRUE)
set.seed(111)  # r2 below is an ill-conditioned model
r2 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardmont, Pardnigr, Pardpull, Trocterr) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          isd.lv = c(2.4, 1.0), Muxfactor = 3.0, trace = FALSE,
          poissonff, data = hspider, Rank = 2, eq.tolerances = TRUE)
sort(deviance(r1, history = TRUE))  # A history of all the fits
sort(deviance(r2, history = TRUE))  # A history of all the fits
if (deviance(r2) > 857) stop("suboptimal fit obtained")
persp(r1, xlim = c(-6, 5), col = 1:4, label = TRUE)  # Involves all species
persp(r2, xlim = c(-6, 5), ylim = c(-4, 5), theta = 10, phi = 20,
      zlim = c(0, 220))
# Omit the two dominant species to see what is behind them
persp(r2, xlim = c(-6, 5), ylim = c(-4, 5), theta = 10, phi = 20,
      zlim = c(0, 220), which = (1:10)[-c(8, 10)])
# Use zlim to retain the original z-scale
## End(Not run)
The first two derivatives of the incomplete gamma integral.
pgamma.deriv(q, shape, tmax = 100)
q , shape
|
As in |
tmax |
Maximum number of iterations allowed in the computation
(per |
Write x = q and shape = a. The first and second derivatives with respect to x and a are returned. This function is similar in spirit to pgamma; define P(a, x) = (1/Γ(a)) ∫_0^x t^(a-1) e^(-t) dt so that P(a, x) is pgamma(x, a).
Currently a 6-column matrix is returned (in the future this may change and an argument may be supplied so that only what is required by the user is computed).
The computations use a series expansion for a ≤ x ≤ 1 or x < a, and otherwise a continued fraction expansion. Machine overflow can occur for large values of x when x is much greater than a.
The first 5 columns, running from left to right, are the derivatives with respect to: x; x^2; a; a^2; x a. The 6th column is P(a, x) (but it is not as accurate as calling pgamma directly).
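As a quick illustrative check (not part of the original help file), the 6th column can be compared against pgamma() directly:

qq <- c(0.5, 2, 5)
ans <- pgamma.deriv(qq, shape = 2)
max(abs(ans[, 6] - pgamma(qq, shape = 2)))  # Should be small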
If convergence does not occur then try increasing the value of tmax.
Yet to do: add more arguments to give greater flexibility in the accuracy desired and to compute only quantities that are required by the user.
T. W. Yee wrote the wrapper function to the Fortran subroutine
written by R. J. Moore. The subroutine was modified to run using
double precision.
The original code came from http://lib.stat.cmu.edu/apstat/187, but this website has since become stale.
Moore, R. J. (1982). Algorithm AS 187: Derivatives of the Incomplete Gamma Integral. Journal of the Royal Statistical Society, Series C (Applied Statistics), 31(3), 330–335.
pgamma.deriv.unscaled
,
pgamma
.
x <- seq(2, 10, length = 501)
head(ans <- pgamma.deriv(x, 2))
## Not run: 
par(mfrow = c(2, 3))
for (jay in 1:6)
  plot(x, ans[, jay], type = "l", col = "blue", cex.lab = 1.5,
       cex.axis = 1.5, las = 1, log = "x",
       main = colnames(ans)[jay], xlab = "q", ylab = "")
## End(Not run)
The first two derivatives of the incomplete gamma integral with scaling.
pgamma.deriv.unscaled(q, shape)
q , shape
|
As in |
Define G(x, a) = ∫_0^x t^(a-1) e^(-t) dt so that G(x, a) is pgamma(x, a) * gamma(a). Write x = q and shape = a. The 0th, first and second derivatives with respect to a of G(x, a) are returned. This function is similar in spirit to pgamma.deriv but here there is no gamma function to scale things.
Currently a 3-column matrix is returned (in the future this may change and an argument may be supplied so that only what is required by the user is computed).
This function is based on Wingo (1989).
The 3 columns, running from left to right, are the 0th, first and second derivatives with respect to a. This function seems inaccurate for q = 1 and q = 2; see the plot below.
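An illustrative check (not part of the original help file): by the definition above, the first column (the 0th derivative) should equal pgamma(q, shape) * gamma(shape).

x <- 3; a <- 0.7
ans.u <- pgamma.deriv.unscaled(x, a)
ans.u[, 1] - pgamma(x, a) * gamma(a)  # Should be near 0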
T. W. Yee.
See truncweibull
.
x <- 3; aa <- seq(0.3, 4, by = 0.01)
ans.u <- pgamma.deriv.unscaled(x, aa)
head(ans.u)
## Not run: 
par(mfrow = c(1, 3))
for (jay in 1:3) {
  plot(aa, ans.u[, jay], type = "l", col = "blue", cex.lab = 1.5,
       cex.axis = 1.5, las = 1, main = colnames(ans.u)[jay],
       log = "", xlab = "shape", ylab = "")
  abline(h = 0, v = 1:2, lty = "dashed", col = "gray")  # Inaccurate at 1 and 2
}
## End(Not run)
Plots a probability density function associated with a LMS quantile regression.
plotdeplot.lmscreg(answer, y.arg, add.arg = FALSE, xlab = "", ylab = "density", xlim = NULL, ylim = NULL, llty.arg = par()$lty, col.arg = par()$col, llwd.arg = par()$lwd, ...)
answer |
Output from functions of the form
|
y.arg |
Numerical vector. The values of the response variable at which to evaluate the density. This should be a grid that is fine enough to ensure the plotted curves are smooth. |
add.arg |
Logical. Add the density to an existing plot? |
xlab , ylab
|
Caption for the x- and y-axes. See |
xlim , ylim
|
Limits of the x- and y-axes. See |
llty.arg |
Line type.
See the |
col.arg |
Line color.
See the |
llwd.arg |
Line width.
See the |
... |
Arguments passed into the |
The above graphical parameters offer some flexibility when plotting the density function.
The list answer, which has components
newdata |
The argument |
y |
The argument |
density |
Vector of the density function values evaluated at |
While the graphical arguments of this function are useful to the user, this function should not be called directly.
Thomas W. Yee
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), bmi.nz)
## Not run: 
y <- seq(15, 43, by = 0.25)
deplot(fit, x0 = 20, y = y, xlab = "BMI", col = "green", llwd = 2,
       main = "BMI distribution at ages 20 (green), 40 (blue), 60 (orange)")
deplot(fit, x0 = 40, y = y, add = TRUE, col = "blue", llwd = 2)
deplot(fit, x0 = 60, y = y, add = TRUE, col = "orange", llwd = 2) -> aa
names(aa@post$deplot)
aa@post$deplot$newdata
head(aa@post$deplot$y)
head(aa@post$deplot$density)
## End(Not run)
Given a GAITD regression object, plots the probability mass function.
plotdgaitd(object, ...)
plotdgaitd.vglm(object, ...)
object |
A fitted GAITD combo regression, e.g.,
|
... |
Graphical arguments passed into |
This is meant to be a more convenient function for plotting
the PMF of the GAITD combo model from a fitted regression model.
The fit should be intercept-only and the distribution
should have 1 or 2 parameters.
Currently it should work for a gaitdpoisson
fit.
As much information as needed
such as the special values
is extracted from the object
and fed into dgaitdplot
.
Same as dgaitdplot
.
This function is subject to change.
T. W. Yee.
dgaitdplot
,
spikeplot
,
gaitdpoisson
.
## Not run: 
example(gaitdpoisson)
gaitpfit2 <-
  vglm(y1 ~ 1, crit = "coef", trace = TRUE, data = gdata,
       gaitdpoisson(a.mix = a.mix, i.mix = i.mix, i.mlm = i.mlm,
                    eq.ap = TRUE, eq.ip = TRUE, truncate = tvec,
                    max.support = max.support))
plotdgaitd(gaitpfit2)
## End(Not run)
The residuals of a QRR-VGLM are plotted for model diagnostic purposes.
plotqrrvglm(object, rtype = c("response", "pearson", "deviance", "working"), ask = FALSE, main = paste(Rtype, "residuals vs latent variable(s)"), xlab = "Latent Variable", I.tolerances = object@control$eq.tolerances, ...)
object |
An object of class |
rtype |
Character string giving residual type. By default, the first one is chosen. |
ask |
Logical. If |
main |
Character string giving the title of the plot. |
xlab |
Character string giving the x-axis caption. |
I.tolerances |
Logical. This argument is fed into
|
... |
Other plotting arguments (see |
Plotting the residuals can be potentially very useful for checking that the model fit is adequate.
The original object.
An ordination plot of a QRR-VGLM can be obtained
by lvplot.qrrvglm
.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
## Not run: 
# QRR-VGLM on the hunting spiders data; this is computationally expensive
set.seed(111)  # This leads to the global solution
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardize environ vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          poissonff, data = hspider, Crow1positive = FALSE)
par(mfrow = c(3, 4))
plot(p1, rtype = "response", col = "blue", pch = 4, las = 1, main = "")
## End(Not run)
Plots the quantiles associated with a LMS quantile regression.
plotqtplot.lmscreg(fitted.values, object, newdata = NULL, percentiles = object@misc$percentiles, lp = NULL, add.arg = FALSE, y = if (length(newdata)) FALSE else TRUE, spline.fit = FALSE, label = TRUE, size.label = 0.06, xlab = NULL, ylab = "", pch = par()$pch, pcex = par()$cex, pcol.arg = par()$col, xlim = NULL, ylim = NULL, llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd, tcol.arg = par()$col, tadj = 1, ...)
fitted.values |
Matrix of fitted values. |
object |
A VGAM quantile regression model, i.e.,
an object produced by modelling functions such as |
newdata |
Data frame at which predictions are made. By default, the original data are used. |
percentiles |
Numerical vector with values between 0 and 100 that specify the percentiles (quantiles). The default is to use the percentiles when fitting the model. For example, the value 50 corresponds to the median. |
lp |
Length of |
add.arg |
Logical. Add the quantiles to an existing plot? |
y |
Logical. Add the response as points to the plot? |
spline.fit |
Logical. Add a spline curve to the plot? |
label |
Logical. Add the percentiles (as text) to the plot? |
size.label |
Numeric. How much room to leave at the RHS for the label. It is in percent (of the range of the primary variable). |
xlab |
Caption for the x-axis. See |
ylab |
Caption for the y-axis. See |
pch |
Plotting character. See |
pcex |
Character expansion of the points.
See |
pcol.arg |
Color of the points.
See the |
xlim |
Limits of the x-axis. See |
ylim |
Limits of the y-axis. See |
llty.arg |
Line type.
See the |
lcol.arg |
Color of the lines.
See the |
llwd.arg |
Line width.
See the |
tcol.arg |
Color of the text
(if |
tadj |
Text justification.
See the |
... |
Arguments passed into the |
The above graphical parameters offer some flexibility when plotting the quantiles.
The matrix of fitted values.
While the graphical arguments of this function are useful to the user, this function should not be called directly.
Thomas W. Yee
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
## Not run: 
fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), data = bmi.nz)
qtplot(fit)
qtplot(fit, perc = c(25, 50, 75, 95), lcol = "blue", tcol = "blue", llwd = 2)
## End(Not run)
Produces a main effects plot for Row-Column Interaction Models (RCIMs).
plotrcim0(object, centered = TRUE, which.plots = c(1, 2), hline0 = TRUE, hlty = "dashed", hcol = par()$col, hlwd = par()$lwd, rfirst = 1, cfirst = 1, rtype = "h", ctype = "h", rcex.lab = 1, rcex.axis = 1, rtick = FALSE, ccex.lab = 1, ccex.axis = 1, ctick = FALSE, rmain = "Row effects", rsub = "", rxlab = "", rylab = "Row effects", cmain = "Column effects", csub = "", cxlab= "", cylab = "Column effects", rcol = par()$col, ccol = par()$col, no.warning = FALSE, ...)
object |
An |
which.plots |
Numeric, describing which plots are to be plotted.
The row effects plot is 1 and the column effects plot is 2.
Set the value |
centered |
Logical.
If |
hline0 , hlty , hcol , hlwd
|
|
rfirst , cfirst
|
|
rmain , cmain
|
Character.
|
rtype , ctype , rsub , csub
|
See the |
rxlab , rylab , cxlab , cylab
|
Character.
For the row effects plot,
|
rcex.lab , ccex.lab
|
Numeric.
|
rcex.axis , ccex.axis
|
Numeric.
|
rtick , ctick
|
Logical.
If |
rcol , ccol
|
|
no.warning |
Logical. If |
... |
Arguments fed into
|
This function plots the row and column effects of a rank-0 RCIM. As the result is a main effects plot of a regression analysis, its interpretation when centered = FALSE is relative to the baseline (reference level) of a row and a column, and should also be considered in light of the link function used. Many arguments that start with "r" refer to the row effects plot, and those starting with "c" to the column effects plot.
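A minimal sketch (reusing the alcoff.e setup from the example below) of drawing only the row-effects panel via which.plots = 1:

alcoff.e <- moffset(alcoff, "6", "Mon", postfix = "*")
fit0 <- rcim(alcoff.e, family = poissonff)
plot(fit0, which.plots = 1, rcol = "blue", rmain = "Hourly effects")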
The original object with the post
slot
assigned additional information from the plot.
This function should only be used to plot rank-0 RCIM objects. If the rank is positive then it will issue a warning.
Using an argument ylim
will mean the row and column
effects are plotted on a common scale;
see plot.window
.
T. W. Yee, A. F. Hadi.
alcoff.e <- moffset(alcoff, "6", "Mon", postfix = "*")  # Effective day
fit0 <- rcim(alcoff.e, family = poissonff)
## Not run: 
par(oma = c(0, 0, 4, 0), mfrow = 1:2)  # For all plots below too
ii <- plot(fit0, rcol = "blue", ccol = "orange", lwd = 4,
           ylim = c(-2, 2),  # A common ylim
           cylab = "Effective daily effects", rylab = "Hourly effects",
           rxlab = "Hour", cxlab = "Effective day")
ii@post  # Endowed with additional information
## End(Not run)

# Negative binomial example
## Not run: 
fit1 <- rcim(alcoff.e, negbinomial, trace = TRUE)
plot(fit1, ylim = c(-2, 2))
## End(Not run)

# Univariate normal example
fit2 <- rcim(alcoff.e, uninormal, trace = TRUE)
## Not run: plot(fit2, ylim = c(-200, 400))

# Median-polish example
## Not run: 
fit3 <- rcim(alcoff.e, alaplace1(tau = 0.5), maxit = 1000, trace = FALSE)
plot(fit3, ylim = c(-200, 250))
## End(Not run)

# Zero-inflated Poisson example on "crashp" (no 0s in alcoff)
## Not run: 
cbind(rowSums(crashp))  # Easy to see the data
cbind(colSums(crashp))  # Easy to see the data
fit4 <- rcim(Rcim(crashp, rbaseline = "5", cbaseline = "Sun"),
             zipoissonff, trace = TRUE)
plot(fit4, ylim = c(-3, 3))
## End(Not run)
Component functions of a vgam-class
object can
be plotted with plotvgam()
. These are on the scale of
the linear/additive predictor.
plotvgam(x, newdata = NULL, y = NULL, residuals = NULL, rugplot = TRUE, se = FALSE, scale = 0, raw = TRUE, offset.arg = 0, deriv.arg = 0, overlay = FALSE, type.residuals = c("deviance", "working", "pearson", "response"), plot.arg = TRUE, which.term = NULL, which.cf = NULL, control = plotvgam.control(...), varxij = 1, ...)
x |
A fitted VGAM object, e.g., produced by
|
newdata |
Data frame. May be used to reconstruct the original data set. |
y |
Unused. |
residuals |
Logical. If |
rugplot |
Logical. If |
se |
Logical. If |
scale |
Numerical. By default, each plot will have its own
y-axis scale. However, by specifying a value, each plot's y-axis
scale will be at least |
raw |
Logical. If |
offset.arg |
Numerical vector of length |
deriv.arg |
Numerical. The order of the derivative.
Should be assigned a small
integer such as 0, 1, 2. Only applies to |
overlay |
Logical. If |
type.residuals |
if |
plot.arg |
Logical. If |
which.term |
Character or integer vector containing all terms to be
plotted, e.g., |
which.cf |
An integer-valued vector specifying which
linear/additive predictors are to be plotted.
The values must be from the set {1,2,..., |
control |
Other control parameters. See |
... |
Other arguments that can be fed into
|
varxij |
Positive integer.
Used if |
In this help file M is the number of linear/additive predictors, and r is the number of columns of the constraint matrix of interest.
Many of plotvgam()
's options can be found in
plotvgam.control
, e.g., line types, line widths,
colors.
The original object, but with the preplot
slot of the object
assigned information regarding the plot.
While plot(fit)
will work if class(fit)
is "vgam"
, it is necessary to use plotvgam(fit)
explicitly otherwise.
plotvgam()
is quite buggy at the moment.
Thomas W. Yee
vgam
,
plotvgam.control
,
predict.vgam
,
plotvglm
,
vglm
.
coalminers <- transform(coalminers, Age = (age - 42) / 5)
fit <- vgam(cbind(nBnW, nBW, BnW, BW) ~ s(Age),
            binom2.or(zero = NULL), data = coalminers)
## Not run: 
par(mfrow = c(1, 3))
plot(fit, se = TRUE, ylim = c(-3, 2), las = 1)
plot(fit, se = TRUE, which.cf = 1:2, lcol = "blue", scol = "orange",
     ylim = c(-3, 2))
plot(fit, se = TRUE, which.cf = 1:2, lcol = "blue", scol = "orange",
     overlay = TRUE)
## End(Not run)
Provides default values for many arguments available for
plotvgam()
.
plotvgam.control(which.cf = NULL, xlim = NULL, ylim = NULL, llty = par()$lty, slty = "dashed", pcex = par()$cex, pch = par()$pch, pcol = par()$col, lcol = par()$col, rcol = par()$col, scol = par()$col, llwd = par()$lwd, slwd = par()$lwd, add.arg = FALSE, one.at.a.time = FALSE, .include.dots = TRUE, noxmean = FALSE, shade = FALSE, shcol = "gray80", main = "", ...)
which.cf |
Integer vector specifying which component
functions are to be plotted (for each covariate). Must
have values from the
set {1,2,..., |
xlim |
Range for the x-axis. |
ylim |
Range for the y-axis. |
llty |
Line type for the fitted functions (lines).
Fed into |
slty |
Line type for the standard error bands.
Fed into |
pcex |
Character expansion for the points (residuals).
Fed into |
pch |
Character used for the points (residuals).
Same as |
pcol |
Color of the points.
Fed into |
lcol |
Color of the fitted functions (lines).
Fed into |
rcol |
Color of the rug plot.
Fed into |
scol |
Color of the standard error bands.
Fed into |
llwd |
Line width of the fitted functions (lines).
Fed into |
slwd |
Line width of the standard error bands.
Fed into |
add.arg |
Logical.
If |
one.at.a.time |
Logical. If |
.include.dots |
Not to be used by the user. |
noxmean |
Logical. If |
shade , shcol
|
|
main |
Character vector, recycled to the number needed. |
... |
Other arguments that may be fed into |
In the above, M is the number of linear/additive predictors.
The most obvious features of plotvgam
can be
controlled by the above arguments.
A list with values matching the arguments.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
plotvgam.control(lcol = c("red", "blue"), scol = "darkgreen", se = TRUE)
Currently this function plots the Pearson residuals versus the linear predictors (M plots) and plots the Pearson residuals versus the hat values (M plots).
plotvglm(x, which = "(All)", ...)
x |
An object of class |
which |
If a subset of the plots is required, specify a subset of the
numbers |
... |
Arguments fed into the primitive |
This function is under development.
Currently it plots the Pearson residuals
against the predicted
values (on the transformed scale) and the hat values.
There are 2M plots in total, therefore users should call par to assign, e.g., the mfrow argument.
Note: Section 3.7 of Yee (2015) describes the
Pearson residuals and hat values for VGLMs.
Returns the object invisibly.
T. W. Yee
plotvgam
,
plotvgam.control
,
vglm
.
## Not run: 
ndata <- data.frame(x2 = runif(nn <- 200))
ndata <- transform(ndata, y1 = rnbinom(nn, mu = exp(3 + x2), size = exp(1)))
fit1 <- vglm(y1 ~ x2, negbinomial, data = ndata, trace = TRUE)
coef(fit1, matrix = TRUE)
par(mfrow = c(2, 2))
plot(fit1)
# Manually produce the four plots
plot(fit1, which = 1, col = "blue", las = 1, main = "main1")
abline(h = 0, lty = "dashed", col = "gray50")
plot(fit1, which = 2, col = "blue", las = 1, main = "main2")
abline(h = 0, lty = "dashed", col = "gray50")
plot(fit1, which = 3, col = "blue", las = 1, main = "main3")
plot(fit1, which = 4, col = "blue", las = 1, main = "main4")
## End(Not run)
The pneumo
data frame has 8 rows and 4 columns.
Exposure time is explanatory, and there are 3 ordinal response variables.
data(pneumo)
This data frame contains the following columns:
exposure.time: a numeric vector, in years
normal: a numeric vector, counts
mild: a numeric vector, counts
severe: a numeric vector, counts
These were collected from coalface workers. In the original data set, the two most severe categories were combined.
Ashford, J. R. (1959). An approach to the analysis of data for semi-quantal responses in biological assay. Biometrics, 15, 573–581.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
# Fit the proportional odds model, p.179, in McCullagh and Nelder (1989)
pneumo <- transform(pneumo, let = log(exposure.time))
vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
Estimating the density parameter of the distances from a fixed point to the u-th nearest point, in a plane or volume.
poisson.points(ostatistic, dimension = 2, link = "loglink", idensity = NULL, imethod = 1)
ostatistic |
Order statistic.
A single positive value, usually an integer.
For example, the value 5 means the response are the distances
of the fifth nearest value to that point (usually over many
planes or volumes).
Non-integers are allowed because the value 1.5 coincides
with |
dimension |
The value 2 or 3; 2 meaning a plane and 3 meaning a volume. |
link |
Parameter link function applied to the (positive) density parameter,
called |
idensity |
Optional initial value for the parameter.
A |
imethod |
An integer with value |
Suppose the number of points in any region of area A of the plane is a Poisson random variable with mean lambda * A (i.e., lambda is the density of the points). Given a fixed point P, define D_1, D_2, ... to be the distances to the nearest point to P, the second nearest to P, etc. This VGAM family function estimates lambda since the probability density function of D_u is easily derived, u = 1, 2, .... Here, u corresponds to the argument ostatistic.
Similarly, suppose the number of points in any volume V is a Poisson random variable with mean lambda * V where, once again, lambda is the density of the points. This VGAM family function estimates lambda by specifying the argument ostatistic and using dimension = 3.
The mean of D_u is returned as the fitted values.
Newton-Raphson is the same as Fisher-scoring.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
rrvglm
and vgam
.
Convergence may be slow if the initial values are far from the solution. This often corresponds to the situation when the response values are all close to zero, i.e., there is a high density of points.
Formulae such as the means have not been fully checked.
T. W. Yee
pdata <- data.frame(y = rgamma(10, shape = exp(-1)))  # Not proper data!
ostat <- 2
fit <- vglm(y ~ 1, poisson.points(ostat, 2), data = pdata,
            trace = TRUE, crit = "coef")
fit <- vglm(y ~ 1, poisson.points(ostat, 3), data = pdata,
            trace = TRUE, crit = "coef")  # Slow convergence?
fit <- vglm(y ~ 1, poisson.points(ostat, 3, idensi = 1), data = pdata,
            trace = TRUE, crit = "coef")
head(fitted(fit))
with(pdata, mean(y))
coef(fit, matrix = TRUE)
Coef(fit)
Family function for a generalized linear model fitted to Poisson responses.
poissonff(link = "loglink", imu = NULL, imethod = 1, parallel = FALSE, zero = NULL, bred = FALSE, earg.link = FALSE, type.fitted = c("mean", "quantiles"), percentiles = c(25, 50, 75))
link |
Link function applied to the mean or means.
See |
parallel |
A logical or formula. Used only if the response is a matrix. |
imu , imethod
|
See |
zero |
Can be an integer-valued vector specifying which linear/additive
predictors
are modelled as intercepts only. The values must be from the set
{1,2,..., |
bred , earg.link
|
Details at |
type.fitted , percentiles
|
Details at |
M defined above is the number of linear/additive predictors. With overdispersed data try negbinomial.
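A rough overdispersion check is sketched below (an illustration, not part of the original help file): for genuinely Poisson data the scaled Pearson statistic should be near 1, and values well above 1 suggest trying negbinomial.

set.seed(1)
odata <- data.frame(x2 = rnorm(100))
odata <- transform(odata, y1 = rpois(100, exp(1 + x2)))
ofit <- vglm(y1 ~ x2, poissonff, data = odata)
sum(resid(ofit, type = "pearson")^2) / df.residual(ofit)  # Near 1 here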
An object of class "vglmff"
(see
vglmff-class
).
The object is used by modelling functions
such as
vglm
, vgam
,
rrvglm
, cqo
,
and cao
.
With multiple responses, assigning a known dispersion parameter for each response is not handled well yet. Currently, only a single known dispersion parameter is handled well.
This function will handle a matrix response automatically.
Regardless of whether the dispersion parameter is to be estimated or not, its value can be seen in the output from summary() of the object.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
hdeff.vglm
,
negbinomial
,
genpoisson1
,
genpoisson2
,
genpoisson0
,
gaitdpoisson
,
zipoisson
,
N1poisson
,
pospoisson
,
skellam
,
mix2poisson
,
cens.poisson
,
ordpoisson
,
amlpoisson
,
inv.binomial
,
simulate.vlm
,
loglink
,
polf
,
rrvglm
,
cqo
,
cao
,
binomialff
,
poisson
,
Poisson
,
poisson.points
,
ruge
,
V1
,
V2
,
residualsvglm
,
margeff
.
poissonff()
set.seed(123)
pdata <- data.frame(x2 = rnorm(nn <- 100))
pdata <- transform(pdata, y1 = rpois(nn, exp(1 + x2)),
                          y2 = rpois(nn, exp(1 + x2)))
(fit1 <- vglm(cbind(y1, y2) ~ x2, poissonff, data = pdata))
(fit2 <- vglm(y1 ~ x2, poissonff(bred = TRUE), data = pdata))
coef(fit1, matrix = TRUE)
coef(fit2, matrix = TRUE)

nn <- 200
cdata <- data.frame(x2 = rnorm(nn), x3 = rnorm(nn), x4 = rnorm(nn))
cdata <- transform(cdata, lv1 = 0 + x3 - 2 * x4)
cdata <- transform(cdata, lambda1 = exp(3 - 0.5 * (lv1 - 0)^2),
                          lambda2 = exp(2 - 0.5 * (lv1 - 1)^2),
                          lambda3 = exp(2 - 0.5 * ((lv1 + 4) / 2)^2))
cdata <- transform(cdata, y1 = rpois(nn, lambda1), y2 = rpois(nn, lambda2),
                          y3 = rpois(nn, lambda3))
## Not run: lvplot(p1, y = TRUE, lcol = 2:4, pch = 2:4, pcol = 2:4, rug = FALSE)
Density for the PoissonPoints distribution.
dpois.points(x, lambda, ostatistic, dimension = 2, log = FALSE)
x |
vector of quantiles. |
lambda |
the mean density of points. |
ostatistic |
positive values, usually integers. |
dimension |
Either 2 and/or 3. |
log |
Logical; if TRUE, the logarithm is returned. |
See poisson.points
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other details.
dpois.points
gives the density.
poisson.points
,
dpois
,
Maxwell
.
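An illustrative check of a well-known special case (not part of the original help file): for the nearest point (ostatistic = 1) in the plane, D_1 has density 2 * pi * lambda * x * exp(-pi * lambda * x^2).

lambda <- 1.5; xx <- seq(0.1, 2, by = 0.1)
max(abs(dpois.points(xx, lambda, ostatistic = 1, dimension = 2) -
        2 * pi * lambda * xx * exp(-pi * lambda * xx^2)))  # Near 0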
## Not run: 
lambda <- 1; xvec <- seq(0, 2, length = 400)
plot(xvec, dpois.points(xvec, lambda, ostat = 1, dimension = 2),
     type = "l", las = 1, col = "blue",
     sub = "First order statistic",
     main = paste("PDF of PoissonPoints distribution with lambda = ",
                  lambda, " and on the plane", sep = ""))
## End(Not run)
Density, distribution function and random generation for the Poisson lognormal distribution.
dpolono(x, meanlog = 0, sdlog = 1, bigx = 170, ...)
ppolono(q, meanlog = 0, sdlog = 1,
        isOne = 1 - sqrt( .Machine$double.eps ), ...)
rpolono(n, meanlog = 0, sdlog = 1)
x , q
|
vector of quantiles. |
n |
number of observations.
If |
meanlog , sdlog
|
the mean and standard deviation of the normal distribution
(on the log scale).
They match the arguments in
|
bigx |
Numeric.
This argument is for handling large values of |
isOne |
Used to test whether the cumulative probabilities have effectively reached unity. |
... |
Arguments passed into
|
The Poisson lognormal distribution is similar to the negative binomial in that it can be motivated by a Poisson distribution whose mean parameter comes from a right skewed distribution (gamma for the negative binomial and lognormal for the Poisson lognormal distribution).
dpolono
gives the density,
ppolono
gives the distribution function, and
rpolono
generates random deviates.
By default, dpolono involves numerical integration that is performed using integrate. Consequently, computations are very slow and numerical problems may occur (if so then the use of ... may be needed). Alternatively, for extreme values of x, meanlog, sdlog, etc., the use of bigx = Inf avoids the call to integrate; however, the answer may be a little inaccurate.
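A direct (slow) check for a single value of x, sketched here as an illustration (not part of the original help file): the density is the Poisson probability mixed over a lognormal density for the mean.

x <- 3; meanlog <- 0.5; sdlog <- 0.5
f <- function(lam) dpois(x, lam) * dlnorm(lam, meanlog, sdlog)
integrate(f, 0, Inf)$value                    # Direct numerical integral
dpolono(x, meanlog = meanlog, sdlog = sdlog)  # Should agree closely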
A VGAM family function for maximum likelihood estimation of the two parameters, called polono(), say, has not been written yet.
T. W. Yee.
Some anonymous soul kindly wrote ppolono()
and
improved the original dpolono()
.
Bulmer, M. G. (1974). On fitting the Poisson lognormal distribution to species-abundance data. Biometrics, 30, 101–110.
lognormal
,
poissonff
,
negbinomial
.
meanlog <- 0.5; sdlog <- 0.5; yy <- 0:19
sum(proby <- dpolono(yy, m = meanlog, sd = sdlog))  # Should be 1
max(abs(cumsum(proby) - ppolono(yy, m = meanlog, sd = sdlog)))  # 0?
## Not run: 
opar <- par(no.readonly = TRUE)
par(mfrow = c(2, 2))
plot(yy, proby, type = "h", col = "blue", ylab = "P[Y=y]", log = "",
     main = paste0("Poisson lognormal(m = ", meanlog,
                   ", sdl = ", sdlog, ")"))

y <- 0:190  # More extreme values; use the approxn & plot on a log scale
(sum(proby <- dpolono(y, m = meanlog, sd = sdlog, bigx = 100)))  # 1?
plot(y, proby, type = "h", col = "blue", ylab = "P[Y=y] (log)", log = "y",
     main = paste0("Poisson lognormal(m = ", meanlog,
                   ", sdl = ", sdlog, ")"))  # Note the kink at bigx

# Random number generation
table(y <- rpolono(n = 1000, m = meanlog, sd = sdlog))
hist(y, breaks = ((-1):max(y)) + 0.5, prob = TRUE, border = "blue")
par(opar)
## End(Not run)
Fits a GLM-/GAM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Capture history behavioural effects are accommodated.
posbernoulli.b(link = "logitlink", drop.b = FALSE ~ 1, type.fitted = c("likelihood.cond", "mean.uncond"), I2 = FALSE, ipcapture = NULL, iprecapture = NULL, p.small = 1e-4, no.warning = FALSE)
link , drop.b , ipcapture , iprecapture
|
See |
I2 |
Logical.
This argument is used for terms that are not parallel.
If |
type.fitted |
Details at |
p.small , no.warning
|
See |
This model (commonly known as M_b/M_bh in the capture–recapture literature) operates on a capture history matrix response of 0s and 1s (dimension n × tau).
See
posbernoulli.t
for details,
e.g., common assumptions with other models.
Once an animal is captured for the first time,
it is marked/tagged so that its future
capture history can be recorded. The effect of the recapture
probability is modelled through a second linear/additive
predictor. It is well-known that some species of animals are
affected by capture,
e.g., trap-shy or trap-happy. This VGAM family function
does allow the capture history to be modelled via such
behavioural effects.
So does posbernoulli.tb
but
posbernoulli.t
cannot.
The number of linear/additive predictors is M = 2, and the default links are (logit p_c, logit p_r)^T, where p_c is the probability of capture and p_r is the probability of recapture.
The fitted value returned is of the same dimension as the response matrix, and depends on the capture history: prior to being first captured, it is pcapture. Afterwards, it is precapture.
By default, the constraint matrices for the intercept term and the other covariates are set up so that p_r differs from p_c by a simple binary effect, on a logit scale. However, this difference (the behavioural effect) is more directly estimated by having I2 = FALSE. Then it allows an estimate of the trap-happy/trap-shy effect; these are positive/negative values respectively. If I2 = FALSE then the (nonstandard) constraint matrix used is cbind(0:1, 1), meaning the first element can be interpreted as the behavioural effect.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
The dependent variable is not scaled to row proportions.
This is the same as posbernoulli.t
and posbernoulli.tb
but different from posbinomial
and binomialff
.
Thomas W. Yee.
See posbernoulli.t
.
posbernoulli.t
and
posbernoulli.tb
(including estimating N),
deermice
,
dposbern
,
rposbern
,
posbinomial
,
aux.posbernoulli.t
,
prinia
.
## Not run: 
# deermice data ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
# Fit a M_b model
M.b <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1,
            posbernoulli.b, data = deermice, trace = TRUE)
coef(M.b)["(Intercept):1"]  # Behavioural effect on logit scale
coef(M.b, matrix = TRUE)
constraints(M.b, matrix = TRUE)
summary(M.b, presid = FALSE)

# Fit a M_bh model
M.bh <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
             posbernoulli.b, data = deermice, trace = TRUE)
coef(M.bh, matrix = TRUE)
coef(M.bh)["(Intercept):1"]  # Behavioural effect on logit scale
# (2,1) elt is for the behavioural effect:
constraints(M.bh)[["(Intercept)"]]
summary(M.bh, presid = FALSE)  # Significant trap-happy effect
# Approx. 95 percent confidence for the behavioural effect:
SE.M.bh <- coef(summary(M.bh))["(Intercept):1", "Std. Error"]
coef(M.bh)["(Intercept):1"] + c(-1, 1) * 1.96 * SE.M.bh

# Fit a M_h model
M.h <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
            posbernoulli.b(drop.b = TRUE ~ sex + weight),
            data = deermice, trace = TRUE)
coef(M.h, matrix = TRUE)
constraints(M.h, matrix = TRUE)
summary(M.h, presid = FALSE)

# Fit a M_0 model
M.0 <- vglm(cbind( y1 + y2 + y3 + y4 + y5 + y6,
                   6 - y1 - y2 - y3 - y4 - y5 - y6) ~ 1,
            posbinomial, data = deermice, trace = TRUE)
coef(M.0, matrix = TRUE)
summary(M.0, presid = FALSE)

# Simulated data set ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
set.seed(123); nTimePts <- 5; N <- 1000  # N is the popn size
pdata <- rposbern(N, nTimePts = nTimePts, pvars = 2, is.popn = TRUE)
nrow(pdata)  # < N (because some animals were never captured)
# The truth: xcoeffs are c(-2, 1, 2) and cap.effect = +1
M.bh.2 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2,
               posbernoulli.b, data = pdata, trace = TRUE)
coef(M.bh.2)
coef(M.bh.2, matrix = TRUE)
constraints(M.bh.2, matrix = TRUE)
summary(M.bh.2, presid = FALSE)
head(depvar(M.bh.2))          # Capture history response matrix
head(M.bh.2@extra$cap.hist1)  # Info on its capture history
head(M.bh.2@extra$cap1)       # When it was first captured
head(fitted(M.bh.2))          # Depends on capture history
(trap.effect <- coef(M.bh.2)["(Intercept):1"])  # Should be +1
head(model.matrix(M.bh.2, type = "vlm"), 21)
head(pdata)
summary(pdata)
dim(depvar(M.bh.2))
vcov(M.bh.2)
M.bh.2@extra$N.hat     # Population size estimate; should be about N
M.bh.2@extra$SE.N.hat  # SE of the estimate of the population size
# An approximate 95 percent confidence interval:
round(M.bh.2@extra$N.hat + c(-1, 1) * 1.96 * M.bh.2@extra$SE.N.hat, 1)
## End(Not run)
Fits a GLM/GAM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Sampling occasion effects are accommodated.
posbernoulli.t(link = "logitlink", parallel.t = FALSE ~ 1, iprob = NULL, p.small = 1e-4, no.warning = FALSE, type.fitted = c("probs", "onempall0"))
link , iprob , parallel.t
|
See |
p.small , no.warning
|
A small probability value used to give a warning for the
Horvitz–Thompson estimator.
Any estimated probability value less than |
type.fitted |
See |
These models (commonly known as M_t or M_th (no prefix h means it is an intercept-only model) in the capture–recapture literature) operate on a capture history matrix response of 0s and 1s (dimension n × tau).
Each column is a
sampling occasion where animals are potentially captured
(e.g., a field trip), and each row is an individual animal.
Capture is a 1, else a 0. No removal of animals from
the population is made (closed population), e.g., no
immigration or emigration. Each row of the response
matrix has at least one capture.
Once an animal is captured for the first time,
it is marked/tagged so that its future capture history can
be recorded. Then it is released immediately back into the
population to remix. It is released immediately after each
recapture too. It is assumed that the animals are independent
and that, for a given animal, each sampling occasion is
independent. And animals do not lose their marks/tags, and
all marks/tags are correctly recorded.
The number of linear/additive predictors is equal to the number of sampling occasions, i.e., M = tau, say. The default link functions are (logit p_1, ..., logit p_tau)^T, where each p_j denotes the probability of capture at time point j.
The fitted value returned is a matrix of probabilities
of the same dimension as the response matrix.
A conditional likelihood is maximized here using Fisher scoring.
Each sampling occasion has a separate probability that is modelled here. The probabilities can be constrained to be equal by setting parallel.t = FALSE ~ 0; then the results are effectively the same as posbinomial except the binomial constants are not included in the log-likelihood. If parallel.t = TRUE ~ 0 then each column should have at least one 1 and at least one 0.
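A minimal sketch of the equality constraint just described (using the deermice data from the examples below):

M.0 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1,
            posbernoulli.t(parallel.t = FALSE ~ 0), data = deermice)
coef(M.0, matrix = TRUE)  # One common logit(p) across all occasions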
It is well-known that some species of animals are affected
by capture, e.g., trap-shy or trap-happy. This VGAM
family function does not allow any behavioral effect to be
modelled (posbernoulli.b
and posbernoulli.tb
do) because the
denominator of the likelihood function must be free of
behavioral effects.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Upon fitting, the extra slot has a (list) component called N.hat which is a point estimate of the population size (it is the Horvitz-Thompson (1952) estimator). And there is a component called SE.N.hat containing its standard error.
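The Horvitz-Thompson computation behind N.hat can be sketched by hand (an illustration of the formula, not the package's internal code), using the M.t fit from the example below:

pmat <- fitted(M.t)                    # Matrix of capture probabilities
pseen <- 1 - apply(1 - pmat, 1, prod)  # P(captured at least once)
sum(1 / pseen)                         # Should match M.t@extra$N.hat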
The weights
argument of vglm
need not be
assigned, and the default is just a matrix of ones.
Fewer numerical problems are likely to occur
for parallel.t = TRUE
.
Data-wise, each sampling occasion may need at least one success
(capture) and one failure.
Less stringent conditions in the data are needed when
parallel.t = TRUE
.
Ditto when parallelism is applied to the intercept too.
The response matrix is returned unchanged;
i.e., not converted into proportions like
posbinomial
. If the response matrix has column
names then these are used in the labelling, else prob1
,
prob2
, etc. are used.
Using AIC()
or BIC()
to compare
posbernoulli.t
,
posbernoulli.b
,
posbernoulli.tb
models with a
posbinomial
model requires posbinomial(omit.constant = TRUE)
because one needs to remove the normalizing constant from the
log-likelihood function.
See posbinomial
for an example.
Thomas W. Yee.
Huggins, R. M. (1991). Some practical aspects of a conditional likelihood approach to capture experiments. Biometrics, 47, 725–732.
Huggins, R. M. and Hwang, W.-H. (2011). A review of the use of conditional likelihood in capture–recapture experiments. International Statistical Review, 79, 385–400.
Otis, D. L. and Burnham, K. P. and White, G. C. and Anderson, D. R. (1978). Statistical inference from capture data on closed animal populations, Wildlife Monographs, 62, 3–135.
Yee, T. W. and Stoklosa, J. and Huggins, R. M. (2015). The VGAM package for capture–recapture data using the conditional likelihood. Journal of Statistical Software, 65, 1–33. doi:10.18637/jss.v065.i05.
posbernoulli.b
,
posbernoulli.tb
,
Select
,
deermice
,
Huggins89table1
,
Huggins89.t1
,
dposbern
,
rposbern
,
posbinomial
,
AICvlm
,
BICvlm
,
prinia
.
M.t <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1,
            posbernoulli.t, data = deermice, trace = TRUE)
coef(M.t, matrix = TRUE)
constraints(M.t, matrix = TRUE)
summary(M.t, presid = FALSE)

M.h.1 <- vglm(Select(deermice, "y") ~ sex + weight, trace = TRUE,
              posbernoulli.t(parallel.t = FALSE ~ -1), deermice)
coef(M.h.1, matrix = TRUE)
constraints(M.h.1)
summary(M.h.1, presid = FALSE)
head(depvar(M.h.1))  # Response capture history matrix
dim(depvar(M.h.1))

M.th.2 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
               posbernoulli.t(parallel.t = FALSE), deermice)
# Test the parallelism assumption wrt sex and weight:
lrtest(M.h.1, M.th.2)
coef(M.th.2)
coef(M.th.2, matrix = TRUE)
constraints(M.th.2)
summary(M.th.2, presid = FALSE)
head(model.matrix(M.th.2, type = "vlm"), 21)

M.th.2@extra$N.hat     # Population size estimate; should be about N
M.th.2@extra$SE.N.hat  # SE of the estimate of the population size
# An approximate 95 percent confidence interval:
round(M.th.2@extra$N.hat + c(-1, 1) * 1.96 * M.th.2@extra$SE.N.hat, 1)

# Fit a M_h model, effectively the parallel M_t model:
deermice <- transform(deermice, ysum = y1 + y2 + y3 + y4 + y5 + y6,
                      tau = 6)
M.h.3 <- vglm(cbind(ysum, tau - ysum) ~ sex + weight,
              posbinomial(omit.constant = TRUE), data = deermice)
max(abs(coef(M.h.1) - coef(M.h.3)))  # Should be zero
# Difference is due to the binomial constants:
logLik(M.h.3) - logLik(M.h.1)
Fits a GLM/GAM-like model to multiple Bernoulli responses where each row in the capture history matrix response has at least one success (capture). Sampling occasion effects and behavioural effects are accommodated.
posbernoulli.tb(link = "logitlink", parallel.t = FALSE ~ 1, parallel.b = FALSE ~ 0, drop.b = FALSE ~ 1, type.fitted = c("likelihood.cond", "mean.uncond"), imethod = 1, iprob = NULL, p.small = 1e-4, no.warning = FALSE, ridge.constant = 0.0001, ridge.power = -4)
link , imethod , iprob
|
See |
parallel.t , parallel.b , drop.b
|
A logical, or formula with a logical as the response.
See Suppose the model is intercept-only.
Setting The default model has a different intercept for each sampling occasion, a time-parallelism assumption for all other covariates, and a dummy variable representing a single behavioural effect (also in the intercept). The most flexible model is to set
|
type.fitted |
Character, one of the choices for the type of fitted value
returned.
The default is the first one.
Partial matching is okay.
For type.fitted = "likelihood.cond", the fitted values are the probabilities defined by the conditional likelihood; for "mean.uncond", they are the unconditional means. |
ridge.constant , ridge.power
|
Determines the ridge parameters at each IRLS iteration.
They are the constant and power (exponent) for the ridge
adjustment for the working weight matrices (the capture
probability block matrix, hence the first tau diagonal values). |
p.small , no.warning
|
See |
This model (commonly known as M_tb or M_tbh in the capture–recapture literature) operates on a response matrix of 0s and 1s (an n x tau matrix of capture histories).
See
posbernoulli.t
for information that is in common.
It allows time and behavioural effects to be modelled.
Evidently, the expected information matrix (EIM) seems not to be of full rank (especially in early iterations), so ridge.constant and ridge.power are used to try to fix up the problem.
The default link functions are
logitlink(p_c1), ..., logitlink(p_c,tau), logitlink(p_r2), ..., logitlink(p_r,tau),
where the subscript c denotes capture, the subscript r denotes recapture, and it is not possible to recapture the animal at sampling occasion 1. Thus M = 2*tau - 1.
The parameters are currently prefixed by
pcapture
and precapture
for the capture and recapture probabilities.
This VGAM family function may be further modified in
the future.
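As a small sketch of the dimension claim above (the names pdat and fit.tb are illustrative, and rposbern() generates covariates x2 and x3 by default; set trace = TRUE to monitor convergence in practice): with tau = 5 sampling occasions an M_tb-type fit has M = 2*5 - 1 = 9 linear predictors.

set.seed(1)
pdat <- rposbern(n = 100, nTimePts = 5)  # columns y1-y5, x2, x3, ...
fit.tb <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2, posbernoulli.tb,
               data = pdat)
npred(fit.tb)  # M = 2 * tau - 1 = 9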
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
It is a good idea to apply the parallelism assumption to each sampling occasion except possibly with respect to the intercepts. Also, modelling a simple behavioural effect through the intercept is recommended; if the behavioural effect is not parallel and/or is allowed to apply to other covariates then there will probably be too many parameters, and hence, numerical problems. See M_tbh.1 below.
It is a good idea to monitor convergence.
Simpler models such as the M_0 and M_h models
are best fitted with
posbernoulli.t
or
posbernoulli.b
or
posbinomial
.
Thomas W. Yee.
See posbernoulli.t
.
posbernoulli.b
(including N.hat
),
posbernoulli.t
,
posbinomial
,
Select
,
fill1
,
Huggins89table1
,
Huggins89.t1
,
deermice
,
prinia
.
## Not run:
# Example 1: simulated data
nTimePts <- 5  # (aka tau == # of sampling occasions)
nnn <- 1000    # Number of animals
pdata <- rposbern(n = nnn, nTimePts = nTimePts, pvars = 2)
dim(pdata); head(pdata)

M_tbh.1 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2,
                posbernoulli.tb, data = pdata, trace = TRUE)
coef(M_tbh.1)  # First element is the behavioural effect
coef(M_tbh.1, matrix = TRUE)
constraints(M_tbh.1, matrix = TRUE)
summary(M_tbh.1, presid = FALSE)  # Std errors are approximate
head(fitted(M_tbh.1))
head(model.matrix(M_tbh.1, type = "vlm"), 21)
dim(depvar(M_tbh.1))

M_tbh.2 <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2,
                posbernoulli.tb(parallel.t = FALSE ~ 0),
                data = pdata, trace = TRUE)
coef(M_tbh.2)  # First element is the behavioural effect
coef(M_tbh.2, matrix = TRUE)
constraints(M_tbh.2, matrix = TRUE)
summary(M_tbh.2, presid = FALSE)  # Std errors are approximate
head(fitted(M_tbh.2))
head(model.matrix(M_tbh.2, type = "vlm"), 21)
dim(depvar(M_tbh.2))

# Example 2: deermice subset data
fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
             posbernoulli.t, data = deermice, trace = TRUE)
coef(fit1)
coef(fit1, matrix = TRUE)
constraints(fit1, matrix = TRUE)
summary(fit1, presid = FALSE)  # Standard errors are approximate

# fit1 is the same as Fit1 (a M_{th} model):
Fit1 <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ sex + weight,
             posbernoulli.tb(drop.b = TRUE ~ sex + weight,
                             parallel.t = TRUE),  # But not for the intercept
             data = deermice, trace = TRUE)
constraints(Fit1)
## End(Not run)
Density, and random generation for multiple Bernoulli responses where each row in the response matrix has at least one success.
rposbern(n, nTimePts = 5, pvars = length(xcoeff),
         xcoeff = c(-2, 1, 2), Xmatrix = NULL, cap.effect = 1,
         is.popn = FALSE, link = "logitlink", earg.link = FALSE)
dposbern(x, prob, prob0 = prob, log = FALSE)
x |
response vector or matrix. Should only have 0 and 1 values, at least two columns, and each row should have at least one 1. |
nTimePts |
Number of sampling occasions.
Called tau in posbernoulli.t and related family functions. |
n |
number of observations.
Usually a single positive integer, else the length of the vector
is used.
See argument |
is.popn |
Logical.
If |
Xmatrix |
Optional X matrix. If given, the X matrix is not generated internally. |
cap.effect |
Numeric, the capture effect. Added to the linear predictor if captured previously. A positive or negative value corresponds to a trap-happy and trap-shy effect respectively. |
pvars |
Number of other numeric covariates that make up
the linear predictor.
Labelled |
xcoeff |
The regression coefficients of the linear predictor.
These correspond to |
link , earg.link
|
The former is used to generate the probabilities for capture
at each occasion.
Other details at |
prob , prob0
|
Matrix of probabilities for the numerator and denominators
respectively.
The default does not correspond to the M_b model. |
log |
Logical. Return the logarithm of the answer? |
The form of the conditional likelihood is described in
posbernoulli.b
and/or
posbernoulli.t
and/or
posbernoulli.tb
.
The denominator is equally shared among the elements of
the matrix x
.
rposbern
returns a data frame with some attributes.
The function generates random deviates
(tau columns labelled
y1
, y2
, ...)
for the response.
Some indicator columns are also included
(those starting with ch
are for previous capture history).
The default setting corresponds to a model that
has a single trap-happy effect.
Covariates
x1
, x2
, ... have the same
effect on capture/recapture at every sampling occasion
(see the argument parallel.t
in, e.g.,
posbernoulli.tb
).
The function dposbern
gives the density.
The r
-type function is experimental only and does not
follow the usual conventions of r
-type R functions.
It may change a lot in the future.
The d
-type function is more conventional and is less
likely to change.
Thomas W. Yee.
posbernoulli.tb
,
posbernoulli.b
,
posbernoulli.t
.
rposbern(n = 10)
attributes(pdata <- rposbern(n = 100))

M.bh <- vglm(cbind(y1, y2, y3, y4, y5) ~ x2 + x3,
             posbernoulli.b(I2 = FALSE), pdata, trace = TRUE)
constraints(M.bh)
summary(M.bh)
Fits a positive binomial distribution.
posbinomial(link = "logitlink", multiple.responses = FALSE,
            parallel = FALSE, omit.constant = FALSE,
            p.small = 1e-4, no.warning = FALSE, zero = NULL)
link , multiple.responses , parallel , zero
|
Details at |
omit.constant |
Logical.
If TRUE then the constant (i.e., the log of the binomial coefficients) is omitted from the log-likelihood calculation; this allows AIC() and BIC() comparisons with positive-Bernoulli fits such as posbernoulli.b (see the example below). |
p.small , no.warning
|
See |
The positive binomial distribution is the ordinary binomial distribution but with the probability of zero being zero. Thus the other probabilities are scaled up (i.e., divided by 1 - P(Y = 0), where P(Y = 0) = (1 - p)^tau for tau trials with success probability p). The fitted values are the ordinary binomial distribution fitted values, i.e., the usual mean.

In the capture–recapture literature this model is called the M_0 if it is an intercept-only model. Otherwise it is called the M_h when there are covariates. It arises from a sum of a sequence of tau Bernoulli random variates subject to at least one success (capture). Here, each animal has the same probability of capture or recapture, regardless of the tau sampling occasions. Independence between animals and between sampling occasions etc. is assumed.
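The rescaling just described can be verified in base R alone; in this sketch tau = 6 and p = 0.3 are purely illustrative:

tau <- 6; p <- 0.3
pr <- dbinom(1:tau, size = tau, prob = p) / (1 - (1 - p)^tau)
sum(pr)                      # 1; the zero class has been redistributed
sum((1:tau) * pr)            # mean of the positive binomial
tau * p / (1 - (1 - p)^tau)  # the same mean, in closed form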
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Under- or over-flow may occur if the data is ill-conditioned.
The input for this family function is the same as
binomialff
.
If multiple.responses = TRUE
then each column of the
matrix response should be a count (the number of successes),
and the weights
argument should be a matrix of the same
dimension as the response containing the number of trials.
If multiple.responses = FALSE
then the response input
should be the same as binomialff
.
Yet to be done: a quasi.posbinomial()
which estimates a
dispersion parameter.
Thomas W. Yee
Otis, D. L. et al. (1978). Statistical inference from capture data on closed animal populations, Wildlife Monographs, 62, 3–135.
Patil, G. P. (1962). Maximum likelihood estimation for generalised power series distributions and its application to a truncated binomial distribution. Biometrika, 49, 227–237.
Pearson, K. (1913). A Monograph on Albinism in Man. Drapers Company Research Memoirs.
posbernoulli.b
,
posbernoulli.t
,
posbernoulli.tb
,
binomialff
,
AICvlm
, BICvlm
,
simulate.vlm
.
# Albinotic children in families with 5 kids (from Patil, 1962) ,,,,
albinos <- data.frame(y = c(rep(1, 25), rep(2, 23), rep(3, 10), 4, 5),
                      n = rep(5, 60))
fit1 <- vglm(cbind(y, n-y) ~ 1, posbinomial, albinos, trace = TRUE)
summary(fit1)
Coef(fit1)  # = MLE of p = 0.3088
head(fitted(fit1))
sqrt(vcov(fit1, untransform = TRUE))  # SE = 0.0322

# Fit a M_0 model (Otis et al. 1978) to the deermice data ,,,,,,,,,,
M.0 <- vglm(cbind( y1 + y2 + y3 + y4 + y5 + y6,
               6 - y1 - y2 - y3 - y4 - y5 - y6) ~ 1, trace = TRUE,
            posbinomial(omit.constant = TRUE), data = deermice)
coef(M.0, matrix = TRUE)
Coef(M.0)
constraints(M.0, matrix = TRUE)
summary(M.0)
c(   N.hat = M.0@extra$N.hat,     # As tau = 6, i.e., 6 Bernoulli trials
  SE.N.hat = M.0@extra$SE.N.hat)  # per obsn is the same for each obsn

# Compare it to the M_b using AIC and BIC
M.b <- vglm(cbind(y1, y2, y3, y4, y5, y6) ~ 1, trace = TRUE,
            posbernoulli.b, data = deermice)
sort(c(M.0 = AIC(M.0), M.b = AIC(M.b)))  # Ok since omit.constant=TRUE
sort(c(M.0 = BIC(M.0), M.b = BIC(M.b)))  # Ok since omit.constant=TRUE
Density, distribution function, quantile function and random generation for the positive-geometric distribution.
dposgeom(x, prob, log = FALSE)
pposgeom(q, prob)
qposgeom(p, prob)
rposgeom(n, prob)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Fed into |
prob |
vector of probabilities of success (of an ordinary geometric distribution). Short vectors are recycled. |
log |
logical. |
The positive-geometric distribution is a geometric distribution but with the probability of a zero being zero. The other probabilities are scaled to add to unity. The mean therefore is 1/prob. As prob decreases, the positive-geometric and geometric distributions become more similar.
Like similar functions for the geometric distribution, a zero value
of
prob
is not permitted here.
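The definition above implies two identities that are easy to check numerically (the values of y and prob are illustrative):

y <- 1:8; prob <- 0.4
max(abs(dposgeom(y, prob) - dgeom(y, prob) / (1 - prob)))  # ~ 0
max(abs(dposgeom(y, prob) - dgeom(y - 1, prob)))           # ~ 0
1 / prob  # the mean of the positive-geometric distribution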
dposgeom
gives the density,
pposgeom
gives the distribution function,
qposgeom
gives the quantile function, and
rposgeom
generates random deviates.
T. W. Yee
zageometric
,
zigeometric
,
rgeom
.
prob <- 0.75; y <- rposgeom(n = 1000, prob)
table(y)
mean(y)   # Sample mean
1 / prob  # Population mean

(ii <- dposgeom(0:7, prob))
cumsum(ii) - pposgeom(0:7, prob)  # Should be 0s
table(rposgeom(100, prob))
table(qposgeom(runif(1000), prob))
round(dposgeom(1:10, prob) * 1000)  # Should be similar

## Not run:
x <- 0:5
barplot(rbind(dposgeom(x, prob), dgeom(x, prob)),
        beside = TRUE, col = c("blue", "orange"),
        main = paste("Positive geometric(", prob, ") (blue) vs",
                     " geometric(", prob, ") (orange)", sep = ""),
        names.arg = as.character(x), las = 1, lwd = 2)
## End(Not run)
Maximum likelihood estimation of the two parameters of a positive negative binomial distribution.
posnegbinomial(zero = "size",
               type.fitted = c("mean", "munb", "prob0"),
               mds.min = 0.001, nsimEIM = 500, cutoff.prob = 0.999,
               eps.trig = 1e-07, max.support = 4000,
               max.chunk.MB = 30, lmunb = "loglink",
               lsize = "loglink", imethod = 1, imunb = NULL,
               iprobs.y = NULL, gprobs.y = ppoints(8), isize = NULL,
               gsize.mux = exp(c(-30, -20, -15, -10, -6:3)))
lmunb |
Link function applied to the munb parameter, the mean of an ordinary negative binomial distribution. |
lsize |
Parameter link function applied to the dispersion parameter,
called k. |
isize |
Optional initial value for k. |
nsimEIM , zero , eps.trig
|
|
mds.min , iprobs.y , cutoff.prob
|
Similar to |
imunb , max.support
|
Similar to |
max.chunk.MB , gsize.mux
|
Similar to |
imethod , gprobs.y
|
See |
type.fitted |
See |
The positive negative binomial distribution is an ordinary negative binomial distribution but with the probability of a zero response being zero. The other probabilities are scaled to sum to unity.
This family function is based on negbinomial
and most details can be found there. To avoid confusion, the
parameter munb
here corresponds to the mean of an ordinary
negative binomial distribution negbinomial
. The
mean of posnegbinomial is munb / (1 - p(0)), where p(0) = (k / (k + munb))^k is the probability that an ordinary negative binomial distribution has a zero value.
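A base-R sketch of the mean formula just given (the values of munb and k are illustrative only):

munb <- 4; k <- 2
p0 <- (k / (k + munb))^k  # P(Y = 0) for the ordinary neg. binomial
munb / (1 - p0)           # mean of the positive neg. binomial
set.seed(1)
y <- rnbinom(1e6, mu = munb, size = k)
mean(y[y > 0])            # simulation agrees closely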
The parameters munb
and k
are not independent in
the positive negative binomial distribution, whereas they are
in the ordinary negative binomial distribution.
This function handles multiple responses, so that a
matrix can be used as the response. The number of columns is
the number of species, say, and setting zero = -2
means
that all species have a k
equalling a (different)
intercept only.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
This family function is fragile;
at least two cases will lead to numerical problems.
Firstly,
the positive-Poisson model corresponds to k
equalling infinity.
If the data is positive-Poisson or close to positive-Poisson,
then the estimated k
will diverge to Inf
or some
very large value.
Secondly, if the data is clustered about the value 1 because
the munb
parameter is close to 0
then numerical problems will also occur.
Users should set trace = TRUE
to monitor convergence.
In the situation when both cases hold, the result returned
(which will be untrustworthy) will depend on the initial values.
The negative binomial distribution (NBD) is a strictly unimodal
distribution. Any data set that does not exhibit a mode (in the
middle) makes the estimation problem difficult. The positive
NBD inherits this feature. Set trace = TRUE
to monitor
convergence.
See the example below of a data set where posnegbinomial() fails; the so-called solution is extremely poor. This is partly due to the lack of a unimodal shape, because the observed counts only decrease. This long tail makes it very difficult to estimate the mean parameter with any certainty, and consequently the size parameter is numerically fraught.
This VGAM family function inherits the same warnings as
negbinomial
.
And if k
is much less than 1 then the estimation may
be slow.
If the estimated k is very large then fitting a
pospoisson
model is a good idea.
If both munb
and k are large then it may be
necessary to decrease
eps.trig
and increase
max.support
so that the EIMs are positive-definite,
e.g.,
eps.trig = 1e-8
and max.support = Inf
.
Thomas W. Yee
Barry, S. C. and Welsh, A. H. (2002). Generalized additive modelling and zero inflated count data. Ecological Modelling, 157, 179–188.
Williamson, E. and Bretherton, M. H. (1964). Tables of the logarithmic series distribution. Annals of Mathematical Statistics, 35, 284–297.
gaitdnbinomial
,
pospoisson
,
negbinomial
,
zanegbinomial
,
rnbinom
,
CommonVGAMffArguments
,
corbet
,
logff
,
simulate.vlm
,
margeff
.
## Not run:
pdata <- data.frame(x2 = runif(nn <- 1000))
pdata <- transform(pdata,
  y1 = rgaitdnbinom(nn, exp(1), munb.p = exp(0 + 2*x2), truncate = 0),
  y2 = rgaitdnbinom(nn, exp(3), munb.p = exp(1 + 2*x2), truncate = 0))
fit <- vglm(cbind(y1, y2) ~ x2, posnegbinomial, pdata, trace = TRUE)
coef(fit, matrix = TRUE)
dim(depvar(fit))  # Using dim(fit@y) is not recommended

# Another artificial data example
pdata2 <- data.frame(munb = exp(2), size = exp(3)); nn <- 1000
pdata2 <- transform(pdata2,
  y3 = rgaitdnbinom(nn, size, munb.p = munb, truncate = 0))
with(pdata2, table(y3))
fit <- vglm(y3 ~ 1, posnegbinomial, data = pdata2, trace = TRUE)
coef(fit, matrix = TRUE)
with(pdata2, mean(y3))  # Sample mean
head(with(pdata2, munb/(1-(size/(size+munb))^size)), 1)  # Popn mean
head(fitted(fit), 3)
head(predict(fit), 3)

# Example: Corbet (1943) butterfly Malaya data
fit <- vglm(ofreq ~ 1, posnegbinomial, weights = species, corbet)
coef(fit, matrix = TRUE)
Coef(fit)
(khat <- Coef(fit)["size"])
pdf2 <- dgaitdnbinom(with(corbet, ofreq), khat,
                     munb.p = fitted(fit), truncate = 0)
print(with(corbet, cbind(ofreq, species,
                         fitted = pdf2*sum(species))), dig = 1)
with(corbet,
matplot(ofreq, cbind(species, fitted = pdf2*sum(species)), las = 1,
        xlab = "Observed frequency (of individual butterflies)",
        type = "b", ylab = "Number of species",
        col = c("blue", "orange"),
        main = "blue 1s = observe; orange 2s = fitted"))

# Data courtesy of Maxim Gerashchenko; causes posnegbinomial() to fail
pnbd.fail <- data.frame(
  y1 = c(1:16, 18:21, 23:28, 33:38, 42, 44, 49:51, 55, 56, 58, 59,
         61:63, 66, 73, 76, 94, 107, 112, 124, 190, 191, 244),
  ofreq = c(130, 80, 38, 23, 22, 11, 21, 14, 6, 7, 9, 9, 9, 4, 4, 5,
            1, 4, 6, 1, 3, 2, 4, 3, 4, 5, 3, 1, 2, 1, 1, 4, 1, 2, 2,
            1, 3, 1, 1, 2, 2, 2, 1, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1))
fit.fail <- vglm(y1 ~ 1, weights = ofreq, posnegbinomial,
                 trace = TRUE, data = pnbd.fail)
## End(Not run)
Density, distribution function, quantile function and random generation for the univariate positive-normal distribution.
dposnorm(x, mean = 0, sd = 1, log = FALSE)
pposnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
qposnorm(p, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
rposnorm(n, mean = 0, sd = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
mean , sd , log , lower.tail , log.p
|
see |
See posnormal
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other
details.
dposnorm
gives the density,
pposnorm
gives the distribution function,
qposnorm
gives the quantile function, and
rposnorm
generates random deviates.
T. W. Yee
## Not run:
m <- 0.8; x <- seq(-1, 4, len = 501)
plot(x, dposnorm(x, m = m), type = "l", las = 1, ylim = 0:1,
     ylab = paste("posnorm(m = ", m, ", sd = 1)"), col = "blue",
     main = "Blue is density, orange is the CDF",
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0, col = "grey")
lines(x, pposnorm(x, m = m), col = "orange", type = "l")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qposnorm(probs, m = m)
lines(Q, dposnorm(Q, m = m), col = "purple", lty = 3, type = "h")
lines(Q, pposnorm(Q, m = m), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(pposnorm(Q, m = m) - probs))  # Should be 0
## End(Not run)
Fits a positive (univariate) normal distribution.
posnormal(lmean = "identitylink", lsd = "loglink",
          eq.mean = FALSE, eq.sd = FALSE,
          gmean = exp((-5:5)/2), gsd = exp((-1:5)/2),
          imean = NULL, isd = NULL, probs.y = 0.10,
          imethod = 1, nsimEIM = NULL, zero = "sd")
lmean , lsd
|
Link functions for the mean and standard
deviation parameters of the usual univariate normal distribution.
They are |
gmean , gsd , imethod
|
See |
imean , isd
|
Optional initial values for |
eq.mean , eq.sd
|
See |
zero , nsimEIM , probs.y
|
See |
The positive normal distribution is the ordinary normal distribution but with the probability of zero or less being zero. The rest of the probability density function is scaled up. Hence the probability density function can be written

f(y) = phi((y - m)/sd) / (sd * (1 - Phi(-m/sd))), for y > 0,

where Phi is the cumulative distribution function of a standard normal (pnorm). Equivalently, this is

f(y) = phi((y - m)/sd) / (sd * Phi(m/sd)),

where phi is the probability density function of a standard normal distribution (dnorm).

The mean of Y is

m + sd * phi(m/sd) / Phi(m/sd).
This family function handles multiple responses.
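A quick numerical check of the density formula above (the values of m, s and y are illustrative):

m <- 0.8; s <- 1.5; y <- seq(0.1, 5, by = 0.7)
max(abs(dposnorm(y, mean = m, sd = s) -
        dnorm(y, m, s) / pnorm(m / s)))  # Should be ~ 0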
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
It is recommended that trace = TRUE
be used to monitor
convergence; sometimes the estimated mean is -Inf
and the
estimated standard deviation is Inf, especially
, especially
when the sample size is small.
Under- or over-flow may occur if the data is ill-conditioned.
The response variable for this family function is the same as
uninormal
except positive values are required.
Reasonably good initial values are needed.
The distribution of the reciprocal of a positive normal random variable is known as an alpha distribution.
Thomas W. Yee
pdata <- data.frame(Mean = 1.0, SD = exp(1.0))
pdata <- transform(pdata, y = rposnorm(n <- 1000, m = Mean, sd = SD))

## Not run:
with(pdata, hist(y, prob = TRUE, border = "blue",
  main = paste("posnorm(m =", Mean[1], ", sd =",
               round(SD[1], 2), ")")))
## End(Not run)

fit <- vglm(y ~ 1, posnormal, data = pdata, trace = TRUE)
coef(fit, matrix = TRUE)
(Cfit <- Coef(fit))
mygrid <- with(pdata, seq(min(y), max(y), len = 200))
## Not run:
lines(mygrid, dposnorm(mygrid, Cfit[1], Cfit[2]), col = "red")
Fits a positive Poisson distribution.
pospoisson(link = "loglink",
           type.fitted = c("mean", "lambda", "prob0"),
           expected = TRUE, ilambda = NULL, imethod = 1,
           zero = NULL, gt.1 = FALSE)
link |
Link function for the usual mean (lambda) parameter of
an ordinary Poisson distribution.
See |
expected |
Logical.
Fisher scoring is used if |
ilambda , imethod , zero
|
See |
type.fitted |
See |
gt.1 |
Logical.
Enforce lambda > 1? |
The positive Poisson distribution is the ordinary Poisson distribution but with the probability of zero being zero. Thus the other probabilities are scaled up (i.e., divided by 1 - exp(-lambda)). The mean, lambda / (1 - exp(-lambda)), can be obtained by the extractor function fitted applied to the object.

A related distribution is the zero-inflated Poisson, in which the probability P(Y = 0) involves another parameter phi. See zipoisson.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
rrvglm
and vgam
.
Under- or over-flow may occur if the data is ill-conditioned.
This family function can handle multiple responses.
Yet to be done: a quasi.pospoisson
which estimates a dispersion
parameter.
Thomas W. Yee
Coleman, J. S. and James, J. (1961). The equilibrium size distribution of freely-forming groups. Sociometry, 24, 36–45.
Gaitdpois
,
gaitdpoisson
,
posnegbinomial
,
poissonff
,
zapoisson
,
zipoisson
,
simulate.vlm
,
otpospoisson
,
Pospois
.
# Data from Coleman and James (1961)
cjdata <- data.frame(y = 1:6, freq = c(1486, 694, 195, 37, 10, 1))
fit <- vglm(y ~ 1, pospoisson, data = cjdata, weights = freq)
Coef(fit)
summary(fit)
fitted(fit)

pdata <- data.frame(x2 = runif(nn <- 1000))  # Artificial data
pdata <- transform(pdata, lambda = exp(1 - 2 * x2))
pdata <- transform(pdata, y1 = rgaitdpois(nn, lambda, truncate = 0))
with(pdata, table(y1))
fit <- vglm(y1 ~ x2, pospoisson, data = pdata, trace = TRUE,
            crit = "coef")
coef(fit, matrix = TRUE)
Computes the power transformation, including its inverse and the first two derivatives.
powerlink(theta, power = 1, inverse = FALSE, deriv = 0,
          short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
power |
This denotes the power or exponent. |
inverse , deriv , short , tag
|
Details at |
The power link function raises a parameter by a certain value of
power
.
Care is needed because it is very easy to get numerical
problems, e.g., if power=0.5
and theta
is
negative.
For powerlink with deriv = 0, the result is theta raised to the power of power. If inverse = TRUE, the result is theta raised to the power of 1/power.
For deriv = 1
, then the function returns
d theta
/ d eta
as a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Numerical problems may occur for certain combinations of
theta
and power
.
Consequently this link function should be used with caution.
Thomas W. Yee
powerlink("a", power = 2, short = FALSE, tag = TRUE) powerlink(x <- 1:5) powerlink(x, power = 2) max(abs(powerlink(powerlink(x, power = 2), power = 2, inverse = TRUE) - x)) # Should be 0 powerlink(x <- (-5):5, power = 0.5) # Has NAs # 1/2 = 0.5 pdata <- data.frame(y = rbeta(n = 1000, shape1 = 2^2, shape2 = 3^2)) fit <- vglm(y ~ 1, betaR(lshape1 = powerlink(power = 0.5), i1 = 3, lshape2 = powerlink(power = 0.5), i2 = 7), data = pdata) t(coef(fit, matrix = TRUE)) Coef(fit) # Useful for intercept-only models vcov(fit, untransform = TRUE)
powerlink("a", power = 2, short = FALSE, tag = TRUE) powerlink(x <- 1:5) powerlink(x, power = 2) max(abs(powerlink(powerlink(x, power = 2), power = 2, inverse = TRUE) - x)) # Should be 0 powerlink(x <- (-5):5, power = 0.5) # Has NAs # 1/2 = 0.5 pdata <- data.frame(y = rbeta(n = 1000, shape1 = 2^2, shape2 = 3^2)) fit <- vglm(y ~ 1, betaR(lshape1 = powerlink(power = 0.5), i1 = 3, lshape2 = powerlink(power = 0.5), i2 = 7), data = pdata) t(coef(fit, matrix = TRUE)) Coef(fit) # Useful for intercept-only models vcov(fit, untransform = TRUE)
Data from a small toxicological experiment. The subjects are fetuses from two randomized groups of pregnant rats, and they were given a placebo or chemical treatment. The number with birth defects was recorded, as well as each litter size.
data(prats)
A data frame with the following variables.
treatment: a 0 means control; a 1 means the chemical treatment.

The remaining variables give the number of fetuses alive at 21 days, out of the number of fetuses alive at 4 days (the litter size).
The data concern a toxicological experiment in which the subjects are fetuses from two randomized groups of 16 pregnant rats each, given a placebo or chemical treatment. The number with birth defects and the litter size were recorded. Half the rats were fed a control diet during pregnancy and lactation, and the diet of the other half was treated with a chemical. For each litter the number of pups alive at 4 days and the number of pups that survived the 21 day lactation period were recorded.
Weil, C. S. (1970) Selection of the valid number of sampling units and a consideration of their combination in toxicological studies involving reproduction, teratogenesis or carcinogenesis. Food and Cosmetics Toxicology, 8(2), 177–182.
Williams, D. A. (1975). The Analysis of Binary Responses From Toxicological Experiments Involving Reproduction and Teratogenicity. Biometrics, 31(4), 949–952.
prats
colSums(subset(prats, treatment == 0))
colSums(subset(prats, treatment == 1))
summary(prats)
Predicted values based on a constrained quadratic ordination (CQO) object.
predictqrrvglm(object, newdata = NULL,
               type = c("link", "response", "latvar", "terms"),
               se.fit = FALSE, deriv = 0, dispersion = NULL,
               extra = object@extra, varI.latvar = FALSE,
               refResponse = NULL, ...)
object |
Object of class inheriting from |
newdata |
An optional data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. |
type , se.fit , dispersion , extra
|
See |
deriv |
Derivative. Currently only 0 is handled. |
varI.latvar , refResponse
|
Arguments passed into |
... |
Currently undocumented. |
Obtains predictions from a fitted CQO object. This function currently has many limitations and is unfinished.
See predictvglm
.
This function is not robust and has not been checked fully.
T. W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
## Not run:
set.seed(1234)
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardize the X vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig +
          CoveMoss + CoveHerb + ReflLux,
          poissonff, data = hspider, Crow1positive = FALSE,
          I.toler = TRUE)
sort(deviance(p1, history = TRUE))  # A history of all the iterations
head(predict(p1))

# The following should be all 0s:
max(abs(predict(p1, newdata = head(hspider)) - head(predict(p1))))
max(abs(predict(p1, newdata = head(hspider), type = "res") -
        head(fitted(p1))))
## End(Not run)
Predicted values based on a vector generalized linear model (VGLM) object.
predictvglm(object, newdata = NULL,
            type = c("link", "response", "terms"),
            se.fit = FALSE, deriv = 0, dispersion = NULL,
            untransform = FALSE,
            type.fitted = NULL, percentiles = NULL, ...)
object |
Object of class inheriting from |
newdata |
An optional data frame in which to look for variables with which to predict. If omitted, the fitted linear predictors are used. |
type |
The value of this argument can be abbreviated.
The type of prediction required. The default is the first one,
meaning on the scale of the linear predictors.
This should be a The alternative The |
se.fit |
logical: return standard errors? |
deriv |
Non-negative integer. Currently this must be zero. Later, this may be implemented for general values. |
dispersion |
Dispersion parameter. This may be inputted at this stage, but the default is to use the dispersion parameter of the fitted model. |
type.fitted |
Some VGAM family functions have an argument by
the same name. If so, then one can obtain fitted values
by setting |
percentiles |
Used only if |
untransform |
Logical. Reverses any parameter link function.
This argument only works if
|
... |
Arguments passed into |
Obtains predictions and optionally estimates
standard errors of those predictions from a
fitted vglm
object.
By default,
each row of the matrix returned can be written as eta_i^T, comprising M components or linear predictors.
If there are any offsets, these
are included.
This code implements smart prediction
(see smartpred
).
If se.fit = FALSE
, a vector or matrix
of predictions.
If se.fit = TRUE
, a list with components
fitted.values |
Predictions |
se.fit |
Estimated standard errors |
df |
Degrees of freedom |
sigma |
The square root of the dispersion parameter (but these are being phased out in the package) |
This function may change in the future.
Setting se.fit = TRUE
and
type = "response"
will generate an error.
The arguments type.fitted
and percentiles
are provided in this function to give more
convenience than
modifying the extra
slot directly.
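As a sketch of this convenience (the pospoisson fit below is constructed here purely for illustration and is not part of the original examples):

pdat <- data.frame(y = rgaitdpois(500, 2, truncate = 0))
fit0 <- vglm(y ~ 1, pospoisson, data = pdat)
head(predict(fit0, type = "response"), 3)  # default fitted type: mean
head(predict(fit0, type = "response", type.fitted = "lambda"), 3)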
Thomas W. Yee
Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
predict
,
vglm
,
predictvlm
,
smartpred
,
calibrate
.
# Illustrates smart prediction
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ poly(c(scale(let)), 2),
            propodds, pneumo, trace = TRUE, x.arg = FALSE)
class(fit)

(q0 <- head(predict(fit)))
(q1 <- predict(fit, newdata = head(pneumo)))
(q2 <- predict(fit, newdata = head(pneumo)))
all.equal(q0, q1)  # Should be TRUE
all.equal(q1, q2)  # Should be TRUE

head(predict(fit))
head(predict(fit, untransform = TRUE))

p0 <- head(predict(fit, type = "response"))
p1 <- head(predict(fit, type = "response", newdata = pneumo))
p2 <- head(predict(fit, type = "response", newdata = pneumo))
p3 <- head(fitted(fit))
all.equal(p0, p1)  # Should be TRUE
all.equal(p1, p2)  # Should be TRUE
all.equal(p2, p3)  # Should be TRUE

predict(fit, type = "terms", se = TRUE)
Estimation of a 3-parameter log-gamma distribution described by Prentice (1974).
prentice74(llocation = "identitylink", lscale = "loglink",
           lshape = "identitylink", ilocation = NULL, iscale = NULL,
           ishape = NULL, imethod = 1,
           glocation.mux = exp((-4:4)/2), gscale.mux = exp((-4:4)/2),
           gshape = qt(ppoints(6), df = 1), probs.y = 0.3,
           zero = c("scale", "shape"))
llocation , lscale , lshape
|
Parameter link function applied to the
location parameter a, positive scale parameter b and shape parameter q, respectively. |
ilocation , iscale
|
Initial value for a and b respectively. |
ishape |
Initial value for q. |
imethod , zero
|
See |
glocation.mux , gscale.mux , gshape , probs.y
|
See |
The probability density function is given by

f(y; a, b, q) = |q| * exp(w/q^2 - exp(w)) / (b * gamma(1/q^2))

for shape parameter q != 0, positive scale parameter b, location parameter a, and all real y. Here, w = (y - a) * q / b + psi(1/q^2), where psi is the digamma function, digamma. The mean of Y is a (returned as the fitted values). This is a different parameterization compared to lgamma3.

Special cases: q = 0 is the normal distribution with standard deviation b, q = -1 is the extreme value distribution for maximums, q = 1 is the extreme value distribution for minima (Weibull). If q > 0 then the distribution is left skew, else q < 0 is right skew.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The special case q = 0 is not handled, therefore estimates of q too close to zero may cause numerical problems. The notation used here differs from Prentice (1974): alpha = a, sigma = b.
Fisher scoring is used.
T. W. Yee
Prentice, R. L. (1974). A log gamma model and its maximum likelihood estimation. Biometrika, 61, 539–544.
lgamma3
,
lgamma
,
gengamma.stacy
.
pdata <- data.frame(x2 = runif(nn <- 1000))
pdata <- transform(pdata, loc = -1 + 2*x2, Scale = exp(1))
pdata <- transform(pdata, y = rlgamma(nn, loc = loc, scale = Scale,
                                      shape = 1))
fit <- vglm(y ~ x2, prentice74(zero = 2:3), data = pdata,
            trace = TRUE)
coef(fit, matrix = TRUE)  # Note the coefficients for location
A data frame with yellow-bellied Prinia.
data(prinia)
A data frame with 151 observations on the following 23 variables.
length: a numeric vector, the scaled wing length (zero mean and unit variance).

fat: a numeric vector, the fat index; originally 1 (no fat) to 4 (very fat) but converted to 0 (no fat) versus 1 otherwise.

cap: a numeric vector, the number of times the bird was captured or recaptured.

noncap: a numeric vector, the number of times the bird was not captured.

y01, y02, ..., y19: numeric vectors of 0s and 1s, for noncapture and capture respectively.
The yellow-bellied Prinia Prinia flaviventris is a common bird species located in Southeast Asia. A capture–recapture experiment was conducted at the Mai Po Nature Reserve in Hong Kong during 1991, where captured individuals had their wing lengths measured and fat index recorded. A total of 19 weekly capture occasions were considered, where 151 distinct birds were captured.
More generally, the prinias are a genus of small insectivorous birds, and are sometimes referred to as wren-warblers. They are a little-known group of the tropical and subtropical Old World, the roughly 30 species being divided fairly equally between Africa and Asia.
Thanks to Paul Yip for permission to make this data available.
Hwang, W.-H. and Huggins, R. M. (2007) Application of semiparametric regression models in the analysis of capture–recapture experiments. Australian and New Zealand Journal of Statistics 49, 191–202.
head(prinia)
summary(prinia)
rowSums(prinia[, c("cap", "noncap")])  # 19s

# Fit a positive-binomial distribution (M.h) to the data:
fit1 <- vglm(cbind(cap, noncap) ~ length + fat, posbinomial, prinia)

# Fit another positive-binomial distribution (M.h) to the data:
# The response input is suitable for posbernoulli.*-type functions.
fit2 <- vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10,
                   y11, y12, y13, y14, y15, y16, y17, y18, y19) ~
             length + fat, posbernoulli.b(drop.b = FALSE ~ 0), prinia)
Computes the probit transformation, including its inverse and the first two derivatives.
probitlink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
           short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue |
See |
inverse , deriv , short , tag
|
Details at |
The probit link function is commonly used for parameters that
lie in the unit interval.
It is the inverse CDF of the standard normal distribution.
Numerical values of theta
close to 0 or 1 or out of range
result in
Inf
, -Inf
, NA
or NaN
.
For deriv = 0
, the probit of theta
, i.e.,
qnorm(theta)
when inverse = FALSE
, and if inverse =
TRUE
then pnorm(theta)
.
For deriv = 1
, then the function returns
d eta
/ d theta
as a function of theta
if inverse = FALSE
,
else if inverse = TRUE
then it returns the reciprocal.
Numerical instability may occur when theta
is close to 1 or 0.
One way of overcoming this is to use bvalue
.
In terms of the threshold approach with cumulative probabilities for
an ordinal response this link function corresponds to the univariate
normal distribution (see uninormal
).
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Links
,
logitlink
,
clogloglink
,
cauchitlink
,
Normal
.
p <- seq(0.01, 0.99, by = 0.01)
probitlink(p)
max(abs(probitlink(probitlink(p), inverse = TRUE) - p))  # Should be 0

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
probitlink(p)  # Has NAs
probitlink(p, bvalue = .Machine$double.eps)  # Has no NAs

## Not run:
p <- seq(0.01, 0.99, by = 0.01); par(lwd = (mylwd <- 2))
plot(p, logitlink(p), type = "l", col = "limegreen",
     ylab = "transformation", las = 1,
     main = "Some probability link functions")
lines(p, probitlink(p), col = "purple")
lines(p, clogloglink(p), col = "chocolate")
lines(p, cauchitlink(p), col = "tan")
abline(v = 0.5, h = 0, lty = "dashed")
legend(0.1, 4,
       c("logitlink", "probitlink", "clogloglink", "cauchitlink"),
       col = c("limegreen", "purple", "chocolate", "tan"),
       lwd = mylwd)
par(lwd = 1)
## End(Not run)
Investigates the profile log-likelihood function for a fitted model of
class "vglm"
.
profilevglm(object, which = 1:p.vlm, alpha = 0.01,
            maxsteps = 10, del = zmax/5, trace = NULL, ...)
object |
the original fitted model object. |
which |
the original model parameters which should be profiled. This can be a numeric or character vector. By default, all parameters are profiled. |
alpha |
highest significance level allowed for the profiling. |
maxsteps |
maximum number of points to be used for profiling each parameter. |
del |
suggested change on the scale of the profile t-statistics. Default value chosen to allow profiling at about 10 parameter values. |
trace |
logical: should the progress of
profiling be reported? The default is to
use the |
... |
further arguments passed to or from other methods. |
This function is called by
confintvglm
to do the profiling.
See also profile.glm
for details.
A list of classes "profile.glm"
and "profile"
with an element
for each parameter being profiled.
The elements are data-frames with two
variables
par.vals |
a matrix of parameter values for each fitted model. |
tau |
the profile t-statistics. |
T. W. Yee adapted this function from
profile.glm
,
written originally by D. M. Bates and W. N. Venables.
(For S in 1996.)
The help file was also used as a template.
vglm
,
confintvglm
,
lrt.stat
,
profile
,
profile.glm
,
plot.profile
in MASS or stats.
pneumo <- transform(pneumo, let = log(exposure.time))
fit1 <- vglm(cbind(normal, mild, severe) ~ let, propodds,
             trace = TRUE, data = pneumo)
pfit1 <- profile(fit1, trace = FALSE)
confint(fit1, method = "profile", trace = FALSE)
Fits the proportional odds model to a (preferably ordered) factor response.
propodds(reverse = TRUE, whitespace = FALSE, ynames = FALSE,
         Thresh = NULL, Trev = reverse,
         Tref = if (Trev) "M" else 1)
reverse , whitespace
|
Logical.
Fed into arguments of the same name in
|
ynames |
See |
Thresh , Trev , Tref
|
Fed into arguments of the same name in
|
The proportional odds model is a special case from the
class of cumulative link models.
It involves a logit link applied to cumulative probabilities
and a strong parallelism assumption.
A parallelism assumption means there is less chance of
numerical problems because the fitted probabilities will remain
between 0 and 1; however
the parallelism assumption ought to be checked,
e.g., via a likelihood ratio test.
This VGAM family function is merely a shortcut for
cumulative(reverse = reverse, link = "logit", parallel = TRUE)
.
Please see cumulative
for more details on this
model.
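A short sketch of the shortcut claim, using the pneumo data from the example below (the "logitlink" spelling follows the current link-function names in this package):

pneumo <- transform(pneumo, let = log(exposure.time))
fit.a <- vglm(cbind(normal, mild, severe) ~ let, propodds,
              data = pneumo)
fit.b <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(link = "logitlink", parallel = TRUE,
                         reverse = TRUE), data = pneumo)
max(abs(coef(fit.a) - coef(fit.b)))  # Should be (near) 0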
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
No check is made to verify that the response is ordinal if the
response is a matrix; see ordered
.
Thomas W. Yee
See cumulative
.
# Fit the proportional odds model, McCullagh and Nelder (1989, p.179)
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo))
depvar(fit)  # Sample proportions
weights(fit, type = "prior")  # Number of observations
coef(fit, matrix = TRUE)
constraints(fit)  # Constraint matrices
summary(fit)

# Check that the model is linear in let ----------------------
fit2 <- vgam(cbind(normal, mild, severe) ~ s(let, df = 2),
             propodds, pneumo)
## Not run: plot(fit2, se = TRUE, lcol = 2, scol = 2)

# Check the proportional odds assumption with a LRT ----------
(fit3 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE), pneumo))
pchisq(deviance(fit) - deviance(fit3),
       df = df.residual(fit) - df.residual(fit3), lower.tail = FALSE)
lrtest(fit3, fit)  # Easier
Plots the fitted probabilities for some very simplified special cases of categorical data analysis models.
prplot(object, control = prplot.control(...), ...)

prplot.control(xlab = NULL, ylab = "Probability", main = NULL,
               xlim = NULL, ylim = NULL, lty = par()$lty,
               col = par()$col, rcol = par()$col, lwd = par()$lwd,
               rlwd = par()$lwd, las = par()$las, rug.arg = FALSE,
               ...)
object |
Currently only an |
control |
List containing some basic graphical parameters. |
xlab , ylab , main , xlim , ylim , lty
|
See |
col , rcol , lwd , rlwd , las , rug.arg
|
See |
... |
Arguments such as |
For models involving one term in the RHS of the formula this function plots the fitted probabilities against the single explanatory variable.
The object is returned invisibly with the preplot
slot assigned.
This is obtained by a call to plotvgam()
.
This function is rudimentary.
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, propodds,
            data = pneumo)
M <- npred(fit)  # Or fit@misc$M
## Not run:
prplot(fit)
prplot(fit, lty = 1:M, col = (1:M)+2, rug = TRUE, las = 1,
       ylim = c(0, 1), rlwd = 2)
## End(Not run)
Adds a list to the end of the list .smart.prediction
in
smartpredenv
.
put.smart(smart)
smart |
a list containing parameters needed later for smart prediction. |
put.smart
is used in "write"
mode within a smart function.
It saves parameters at the time of model fitting, which are
later used for prediction.
The function put.smart
is the opposite of
get.smart
, and both deal with the same contents.
Nothing is returned.
The variable .smart.prediction.counter
in
smartpredenv
is incremented beforehand,
and .smart.prediction[[.smart.prediction.counter]]
is
assigned the list smart
.
If the list .smart.prediction
in
smartpredenv
is not long enough
to hold smart
, then it is made larger, and the variable
.max.smart
in
smartpredenv
is adjusted accordingly.
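A sketch of a smart function in "write" mode, modelled on the package's own sm.min1() (printed in the example below); the name sm.min.demo is illustrative:

sm.min.demo <- function(x) {
  minx <- min(x)                 # the default: compute from x
  if (smart.mode.is("read")) {   # prediction time: reuse saved value
    smart <- get.smart()
    minx <- smart$minx
  } else if (smart.mode.is("write")) {  # fitting time: save it
    put.smart(list(minx = minx))
  }
  minx
}
attr(sm.min.demo, "smart") <- TRUE  # mark the function as smart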
print(sm.min1)
Algorithmic constants and parameters for a constrained quadratic
ordination (CQO), by fitting a quadratic reduced-rank vector
generalized linear model (QRR-VGLM), are set using this function.
It is the control function for cqo
.
qrrvglm.control(Rank = 1, Bestof = if (length(Cinit)) 1 else 10,
    checkwz = TRUE, Cinit = NULL, Crow1positive = TRUE,
    epsilon = 1.0e-06, EqualTolerances = NULL, eq.tolerances = TRUE,
    Etamat.colmax = 10, FastAlgorithm = TRUE,
    GradientFunction = TRUE, Hstep = 0.001,
    isd.latvar = rep_len(c(2, 1, rep_len(0.5, Rank)), Rank),
    iKvector = 0.1, iShape = 0.1, ITolerances = NULL,
    I.tolerances = FALSE, maxitl = 40, imethod = 1,
    Maxit.optim = 250, MUXfactor = rep_len(7, Rank),
    noRRR = ~ 1, Norrr = NA, optim.maxit = 20,
    Parscale = if (I.tolerances) 0.001 else 1.0,
    sd.Cinit = 0.02, SmallNo = 5.0e-13, trace = TRUE,
    Use.Init.Poisson.QO = TRUE,
    wzepsilon = .Machine$double.eps^0.75, ...)
In the following, R is the Rank, M is the number of linear predictors, and S is the number of responses (species). Thus M = S for binomial and Poisson responses, and M = 2S for the negative binomial and 2-parameter gamma distributions.
Rank |
The numerical rank |
Bestof |
Integer. The best of |
checkwz |
Logical indicating whether the diagonal elements of the working weight matrices should be checked to see whether they are sufficiently positive, i.e., greater than |
Cinit |
Optional initial |
Crow1positive |
Logical vector of length |
epsilon |
Positive numeric. Used to test for convergence for GLMs fitted in C. Larger values mean a loosening of the convergence criterion. If an error code of 3 is reported, try increasing this value. |
eq.tolerances |
Logical indicating whether each (quadratic) predictor will
have equal tolerances. Having |
EqualTolerances |
Defunct argument.
Use |
Etamat.colmax |
Positive integer, no smaller than |
FastAlgorithm |
Logical. Whether a new fast algorithm is to be used. The fast algorithm results in a large speed increase compared to Yee (2004). Some details of the fast algorithm are found in Appendix A of Yee (2006). Setting |
GradientFunction |
Logical. Whether |
Hstep |
Positive value. Used as the step size in
the finite difference
approximation to the derivatives
by |
isd.latvar |
Initial standard deviations for the latent variables
(site scores).
Numeric, positive and of length |
iKvector , iShape
|
Numeric, recycled to length |
I.tolerances |
Logical. If |
ITolerances |
Defunct argument.
Use |
maxitl |
Maximum number of times the optimizer is called or restarted. Most users should ignore this argument. |
imethod |
Method of initialization. A positive integer 1 or 2 or 3 etc.
depending on the VGAM family function.
Currently it is used for |
Maxit.optim |
Positive integer. Number of iterations given to the function
|
MUXfactor |
Multiplication factor for detecting large offset values.
Numeric,
positive and of length |
optim.maxit |
Positive integer. Number of times |
noRRR |
Formula giving terms that are not
to be included in the
reduced-rank regression (or formation of
the latent variables),
i.e., those belonging to |
Norrr |
Defunct. Please use |
Parscale |
Numerical and positive-valued vector of length |
sd.Cinit |
Standard deviation of the initial values for the elements
of |
trace |
Logical indicating if output should be produced for
each iteration. The default is |
SmallNo |
Positive numeric between |
Use.Init.Poisson.QO |
Logical. If |
wzepsilon |
Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. |
... |
Ignored at present. |
Recall that the central formula for CQO is
eta = B_1^T x_1 + A nu + sum_{m=1}^M (nu^T D_m nu) e_m
where x_1 is a vector (usually just a 1 for an intercept), x_2 is a vector of environmental variables, nu = C^T x_2 is a R-vector of latent variables, and e_m is a vector of 0s but with a 1 in the m-th position.
QRR-VGLMs are an extension of RR-VGLMs and
allow for maximum
likelihood solutions to constrained
quadratic ordination (CQO) models.
Having I.tolerances = TRUE means all the tolerance matrices are the order-R identity matrix, i.e., it forces
bell-shaped curves/surfaces on all species. This results in a
more difficult optimization problem (especially for 2-parameter
models such as the negative binomial and gamma) because of overflow
errors and it appears there are more local solutions. To help avoid
the overflow errors, scaling
by the factor
Parscale
can help enormously. Even better, scaling by specifying
isd.latvar
is more understandable to humans. If failure to
converge occurs, try adjusting Parscale
, or better, setting
eq.tolerances = TRUE
(and hope that the estimated tolerance
matrix is positive-definite). To fit an equal-tolerances model, it
is firstly best to try setting I.tolerances = TRUE
and varying
isd.latvar
and/or MUXfactor
if it fails to converge.
If it still fails to converge after many attempts, try setting
eq.tolerances = TRUE
, however this will usually be a lot slower
because it requires a lot more memory.
With an I.tolerances = TRUE model, the latent variables are always uncorrelated, i.e., the variance-covariance matrix of the site scores is a diagonal matrix.
If setting eq.tolerances = TRUE
is
used and the common
estimated tolerance matrix is positive-definite
then that model is
effectively the same as the I.tolerances = TRUE
model (the two are
transformations of each other).
In general, I.tolerances = TRUE
is numerically more unstable and presents
a more difficult problem
to optimize; the arguments isd.latvar
and/or MUXfactor
often
must be assigned some good value(s)
(possibly found by trial and error)
in order for convergence to occur.
Setting I.tolerances = TRUE
forces a bell-shaped curve or surface
onto all the species data,
therefore this option should be used with
deliberation. If unsuitable,
the resulting fit may be very misleading.
Usually it is a good idea
for the user to set eq.tolerances = FALSE
to see which species
appear to have a bell-shaped curve or surface.
Improvements to the
fit can often be achieved using transformations,
e.g., nitrogen
concentration to log nitrogen concentration.
Fitting a CAO model (see cao
)
first is a good idea for
pre-examining the data and checking whether
it is appropriate to fit
a CQO model.
A list with components matching the input names.
The default value of Bestof
is a bare minimum
for many datasets,
therefore it will be necessary to increase its
value to increase the
chances of obtaining the global solution.
When I.tolerances = TRUE it is a good idea to apply scale to all the numerical variables that make up the latent variable, i.e., those of x_2. This is to make them have mean 0, and hence avoid large offset values which cause numerical problems.
This function has many arguments that are common with
rrvglm.control
and vglm.control
.
It is usually a good idea to try fitting a model with
I.tolerances = TRUE
first, and
if convergence is unsuccessful,
then try eq.tolerances = TRUE
and I.tolerances = FALSE
.
Ordination diagrams with eq.tolerances = TRUE have a natural interpretation, but with eq.tolerances = FALSE they are more complicated and require, e.g., contours to be overlaid on the ordination diagram (see lvplot.qrrvglm).
In the example below, an equal-tolerances CQO model is fitted to the hunting spiders data. Because I.tolerances = TRUE, it is a good idea to center all the variables first. Upon fitting the model, the actual standard deviation of the site scores is computed. Ideally, the isd.latvar argument should have had this value for the best chances of getting good initial values. For comparison, the model is refitted with that value; it should run faster and more reliably.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
cqo, rcqo, Coef.qrrvglm, Coef.qrrvglm-class, optim, binomialff, poissonff, negbinomial, gamma2.
## Not run: 
# Poisson CQO with equal tolerances
set.seed(111)  # This leads to the global solution
hspider[, 1:6] <- scale(hspider[, 1:6])  # Good when I.tolerances = TRUE
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          poissonff, data = hspider, eq.tolerances = TRUE)
sort(deviance(p1, history = TRUE))  # Iteration history
(isd.latvar <- apply(latvar(p1), 2, sd))  # Approx isd.latvar

# Refit the model with better initial values
set.seed(111)  # This leads to the global solution
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig + CoveMoss + CoveHerb + ReflLux,
          I.tolerances = TRUE, poissonff, data = hspider,
          isd.latvar = isd.latvar)  # Note this
sort(deviance(p1, history = TRUE))  # Iteration history
## End(Not run)
Plots quantiles associated with a Gumbel model.
qtplot.gumbel(object, show.plot = TRUE, y.arg = TRUE, spline.fit = FALSE,
              label = TRUE, R = object@misc$R,
              percentiles = object@misc$percentiles,
              add.arg = FALSE, mpv = object@misc$mpv,
              xlab = NULL, ylab = "", main = "",
              pch = par()$pch, pcol.arg = par()$col,
              llty.arg = par()$lty, lcol.arg = par()$col,
              llwd.arg = par()$lwd, tcol.arg = par()$col,
              tadj = 1, ...)
object |
A VGAM extremes model of the
Gumbel type, produced by modelling functions such as |
show.plot |
Logical. Plot it? If |
y.arg |
Logical. Add the raw data on to the plot? |
spline.fit |
Logical. Use a spline fit through the fitted percentiles? This can be useful if there are large gaps between some values along the covariate. |
label |
Logical. Label the percentiles? |
R |
See |
percentiles |
See |
add.arg |
Logical. Add the plot to an existing plot? |
mpv |
See |
xlab |
Caption for the x-axis. See |
ylab |
Caption for the y-axis. See |
main |
Title of the plot. See |
pch |
Plotting character. See |
pcol.arg |
Color of the points.
See the |
llty.arg |
Line type.
See the |
lcol.arg |
Color of the lines.
See the |
llwd.arg |
Line width.
See the |
tcol.arg |
Color of the text
(if |
tadj |
Text justification.
See the |
... |
Arguments passed into the |
There should be a single covariate such as time.
The quantiles specified by percentiles
are plotted.
The object with a list called qtplot
in the post
slot of object
.
(If show.plot = FALSE
then just the list is returned.)
The list contains components
fitted.values |
The percentiles of the response, possibly including the MPV. |
percentiles |
The percentiles (a small vector of values between 0 and 100). |
Unlike gumbel
, one cannot have
percentiles = NULL
.
Thomas W. Yee
ymat <- as.matrix(venice[, paste("r", 1:10, sep = "")])
fit1 <- vgam(ymat ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE),
             data = venice, trace = TRUE, na.action = na.pass)
head(fitted(fit1))
## Not run: 
par(mfrow = c(1, 1), bty = "l", xpd = TRUE, las = 1)
qtplot(fit1, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5),
       lwd = 2, pcol = "blue", tadj = 0.4, ylab = "Sea level (cm)")
qtplot(fit1, perc = 97, mpv = FALSE, lcol = 3, tcol = 3,
       lwd = 2, tadj = 0.4, add = TRUE) -> saved
head(saved@post$qtplot$fitted)
## End(Not run)
Plots quantiles associated with a LMS quantile regression.
qtplot.lmscreg(object, newdata = NULL,
               percentiles = object@misc$percentiles,
               show.plot = TRUE, ...)
object |
A VGAM quantile regression model, i.e.,
an object produced by modelling functions
such as |
newdata |
Optional data frame for computing the quantiles. If missing, the original data is used. |
percentiles |
Numerical vector with values between 0 and 100 that specify the percentiles (quantiles). The default are the percentiles used when the model was fitted. |
show.plot |
Logical. Plot it? If |
... |
Graphical parameter that are passed into
|
The ‘primary’ variable is defined as the main covariate upon which the regression or smoothing is performed. For example, in medical studies, it is often the age. In VGAM, it is possible to handle more than one covariate, however, the primary variable must be the first term after the intercept.
A list with the following components.
fitted.values |
A vector of fitted percentile values. |
percentiles |
The percentiles used. |
plotqtplot.lmscreg does the actual plotting.
Thomas W. Yee
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
plotqtplot.lmscreg, deplot.lmscreg, lms.bcn, lms.bcg, lms.yjn.
## Not run: 
fit <- vgam(BMI ~ s(age, df = c(4, 2)), lms.bcn(zero = 1), bmi.nz)
qtplot(fit)
qtplot(fit, perc = c(25, 50, 75, 95), lcol = 4, tcol = 4, llwd = 2)
## End(Not run)
Takes a rcim
fit of the appropriate format and
returns either the quasi-variances or quasi-standard errors.
qvar(object, se = FALSE, ...)
object |
A |
se |
Logical. If |
... |
Currently unused. |
This simple function is ad hoc: it is equivalent to computing the quasi-variances by diag(predict(fit1)[, c(TRUE, FALSE)]) / 2. This function is for convenience only. Serious users of quasi-variances ought to understand why and how this function works.
A vector of quasi-variances or quasi-standard errors.
T. W. Yee.
rcim, uninormal, explink, Qvar, ships.
data("ships", package = "MASS") Shipmodel <- vglm(incidents ~ type + year + period, poissonff, offset = log(service), data = ships, subset = (service > 0)) # Easiest form of input fit1 = rcim(Qvar(Shipmodel, "type"), uninormal("explink"), maxit=99) qvar(fit1) # Quasi-variances qvar(fit1, se = TRUE) # Quasi-standard errors # Manually compute them: (quasiVar <- exp(diag(fitted(fit1))) / 2) # Version 1 (quasiVar <- diag(predict(fit1)[, c(TRUE, FALSE)]) / 2) # Version 2 (quasiSE <- sqrt(quasiVar)) ## Not run: qvplot(fit1, col = "green", lwd = 3, scol = "blue", slwd = 2, las = 1) ## End(Not run)
data("ships", package = "MASS") Shipmodel <- vglm(incidents ~ type + year + period, poissonff, offset = log(service), data = ships, subset = (service > 0)) # Easiest form of input fit1 = rcim(Qvar(Shipmodel, "type"), uninormal("explink"), maxit=99) qvar(fit1) # Quasi-variances qvar(fit1, se = TRUE) # Quasi-standard errors # Manually compute them: (quasiVar <- exp(diag(fitted(fit1))) / 2) # Version 1 (quasiVar <- diag(predict(fit1)[, c(TRUE, FALSE)]) / 2) # Version 2 (quasiSE <- sqrt(quasiVar)) ## Not run: qvplot(fit1, col = "green", lwd = 3, scol = "blue", slwd = 2, las = 1) ## End(Not run)
Takes a vglm
fit or a variance-covariance matrix,
and preprocesses it for rcim
and
uninormal
so that quasi-variances can be computed.
Qvar(object, factorname = NULL, which.linpred = 1,
     coef.indices = NULL, labels = NULL, dispersion = NULL,
     reference.name = "(reference)", estimates = NULL)
object |
A |
which.linpred |
A single integer from the set |
factorname |
Character.
If the |
labels |
Character. Optional, for labelling the variance-covariance matrix. |
dispersion |
Numeric.
Optional, passed into |
reference.name |
Character. Label for the reference level. |
coef.indices |
Optional numeric vector of length at least 3 specifying the indices of the factor from the variance-covariance matrix. |
estimates |
an optional vector of estimated coefficients
(redundant if |
Suppose a factor with L levels is an explanatory variable in a regression model. By default, R treats the first level as baseline so that its coefficient is set to zero. It estimates the other L-1 coefficients, and together with their associated standard errors, this is the conventional output. From the complete variance-covariance matrix one can compute L quasi-variances based on all pairwise differences of the coefficients. They are based on an approximation, and can be treated as uncorrelated. In minimizing the relative (not absolute) errors it is not hard to see that the estimation involves a RCIM (rcim) with an exponential link function (explink).
If object
is a model, then at least one of factorname
or
coef.indices
must be non-NULL
. The value of
coef.indices
, if non-NULL
, determines which rows and
columns of the model's variance-covariance matrix to use. If
coef.indices
contains a zero, an extra row and column are
included at the indicated position, to represent the zero variances
and covariances associated with a reference level. If
coef.indices
is NULL
, then factorname
should be
the name of a factor effect in the model, and is used in order to
extract the necessary variance-covariance estimates.
Quasi-variances were first implemented in R with qvcalc. This implementation draws heavily from that.
An L by L matrix whose (i, j) element is the logarithm of the variance of the i-th coefficient minus the j-th coefficient, for all values of i and j. The diagonal elements are arbitrary and are set to zero. The matrix has an attribute that corresponds to the prior weight matrix; it is accessed by uninormal and replaces the usual weights argument of vglm. This weight matrix has ones on the off-diagonals and some small positive number on the diagonals.
Negative quasi-variances may occur (one of them and
only one), though they are rare in practice. If
so then numerical problems may occur. See
qvcalc()
for more information.
This is an adaptation of qvcalc()
in qvcalc.
It should work for all vglm
models with one linear predictor, i.e., M = 1. For M > 1 the factor should appear in only one of the linear predictors.
It is important to set maxit
to be larger than usual for
rcim
since convergence is slow. Upon successful
convergence the i-th row effect and the i-th column effect
should be equal. A simple computation involving the fitted and
predicted values allows the quasi-variances to be extracted (see
example below).
A function to plot comparison intervals has not been written here.
T. W. Yee, based heavily on qvcalc()
in qvcalc
written by David Firth.
Firth, D. (2003). Overcoming the reference category problem in the presentation of statistical models. Sociological Methodology 33, 1–18.
Firth, D. and de Menezes, R. X. (2004). Quasi-variances. Biometrika 91, 65–80.
Yee, T. W. and Hadi, A. F. (2014). Row-column interaction models, with an R implementation. Computational Statistics, 29, 1427–1445.
rcim, vglm, qvar, uninormal, explink, qvcalc() in qvcalc, ships.
# Example 1
data("ships", package = "MASS")

Shipmodel <- vglm(incidents ~ type + year + period, poissonff,
                  offset = log(service),
                  # trace = TRUE, model = TRUE,
                  data = ships, subset = (service > 0))

# Easiest form of input
fit1 <- rcim(Qvar(Shipmodel, "type"), uninormal("explink"), maxit = 99)
qvar(fit1)             # Easy method to get the quasi-variances
qvar(fit1, se = TRUE)  # Easy method to get the quasi-standard errors

(quasiVar <- exp(diag(fitted(fit1))) / 2)                # Version 1
(quasiVar <- diag(predict(fit1)[, c(TRUE, FALSE)]) / 2)  # Version 2
(quasiSE  <- sqrt(quasiVar))

# Another form of input
fit2 <- rcim(Qvar(Shipmodel, coef.ind = c(0, 2:5),
                  reference.name = "typeA"),
             uninormal("explink"), maxit = 99)
## Not run: qvplot(fit2, col = "green", lwd = 3, scol = "blue", slwd = 2, las = 1)

# The variance-covariance matrix is another form of input (not recommended)
fit3 <- rcim(Qvar(cbind(0, rbind(0, vcov(Shipmodel)[2:5, 2:5])),
                  labels = c("typeA", "typeB", "typeC", "typeD", "typeE"),
                  estimates = c(typeA = 0, coef(Shipmodel)[2:5])),
             uninormal("explink"), maxit = 99)
(QuasiVar <- exp(diag(fitted(fit3))) / 2)                # Version 1
(QuasiVar <- diag(predict(fit3)[, c(TRUE, FALSE)]) / 2)  # Version 2
(QuasiSE  <- sqrt(QuasiVar))
## Not run: qvplot(fit3)

# Example 2: a model with M > 1 linear predictors
## Not run: 
require("VGAMdata")
xs.nz.f <- subset(xs.nz, sex == "F")
xs.nz.f <- subset(xs.nz.f, !is.na(babies) & !is.na(age) & !is.na(ethnicity))
xs.nz.f <- subset(xs.nz.f, ethnicity != "Other")

clist <- list("sm.bs(age, df = 4)" = rbind(1, 0),
              "sm.bs(age, df = 3)" = rbind(0, 1),
              "ethnicity"          = diag(2),
              "(Intercept)"        = diag(2))
fit1 <- vglm(babies ~ sm.bs(age, df = 4) + sm.bs(age, df = 3) + ethnicity,
             zipoissonff(zero = NULL), xs.nz.f,
             constraints = clist, trace = TRUE)
Fit1 <- rcim(Qvar(fit1, "ethnicity", which.linpred = 1),
             uninormal("explink", imethod = 1), maxit = 99, trace = TRUE)
Fit2 <- rcim(Qvar(fit1, "ethnicity", which.linpred = 2),
             uninormal("explink", imethod = 1), maxit = 99, trace = TRUE)
## End(Not run)
## Not run: 
par(mfrow = c(1, 2))
qvplot(Fit1, scol = "blue", pch = 16, main = expression(eta[1]),
       slwd = 1.5, las = 1, length.arrows = 0.07)
qvplot(Fit2, scol = "blue", pch = 16, main = expression(eta[2]),
       slwd = 1.5, las = 1, length.arrows = 0.07)
## End(Not run)
R-squared goodness of fit for latent variable models, such as cumulative link models. Some software, such as Stata, calls this quantity the McKelvey–Zavoina R-squared; it was proposed in their 1975 paper for cumulative probit models.
R2latvar(object)
object |
A |
Models such as the proportional odds model have
a latent variable interpretation
(see, e.g., Section 6.2.6 of Agresti (2018),
Section 14.4.1.1 of Yee (2015),
Section 5.2.2 of McCullagh and Nelder (1989)).
It is possible to summarize the predictive power of the model by computing R^2 on the transformed scale, e.g., on a standard normal distribution for a probitlink link. For more details see Section 6.3.7 of Agresti (2018).
The R^2 value. Approximately, that amount is the variability in the latent variable of the model explained by all the explanatory variables. Then taking the positive square-root gives an approximate multiple correlation R.
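As a rough cross-check, the quantity can be computed by hand for a cumulative logit model, assuming the latent errors are standard logistic (variance pi^2 / 3):

pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)
eta1 <- predict(fit)[, 1]           # A linear predictor of the fit
var(eta1) / (var(eta1) + pi^2 / 3)  # Should be close to R2latvar(fit)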
Thomas W. Yee
Agresti, A. (2018). An Introduction to Categorical Data Analysis, 3rd ed., New York: John Wiley & Sons.
McKelvey, R. D. and W. Zavoina (1975). A statistical model for the analysis of ordinal level dependent variables. The Journal of Mathematical Sociology, 4, 103–120.
vglm, cumulative, propodds, logitlink, probitlink, clogloglink, summary.lm.
pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, data = pneumo)) R2latvar(fit)
Returns the rank of reduced-rank regression-type models in the VGAM package.
Rank(object, ...)
Rank.rrvglm(object, ...)
Rank.qrrvglm(object, ...)
Rank.rrvgam(object, ...)
object |
Some VGAM object, for example, having
class |
... |
Other possible arguments fed into the function later (used for added flexibility for the future). |
Regression models based on reduced-rank regression have a quantity
called the rank, which is 1 or 2 or 3 etc.
The smaller the value the more dimension reduction, so that there
are fewer parameters.
This function was not called rank() to avoid conflict with rank.
Returns an integer value, provided the rank of the model makes sense.
This function has not been defined for VGLMs yet.
It might refer to the rank of the VL model matrix,
but for now this function should not be applied to
vglm
fits.
T. W. Yee.
RR-VGLMs are described in rrvglm-class; QRR-VGLMs are described in qrrvglm-class.
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))
(fit1 <- rrvglm(cbind(normal, mild, severe) ~ let + x3, acat, data = pneumo))
coef(fit1, matrix = TRUE)
constraints(fit1)
Rank(fit1)
Estimating the parameter of the Rayleigh distribution by maximum likelihood estimation. Right-censoring is allowed.
rayleigh(lscale = "loglink", nrfs = 1/3 + 0.01, oim.mean = TRUE, zero = NULL, parallel = FALSE, type.fitted = c("mean", "percentiles", "Qlink"), percentiles = 50) cens.rayleigh(lscale = "loglink", oim = TRUE)
rayleigh(lscale = "loglink", nrfs = 1/3 + 0.01, oim.mean = TRUE, zero = NULL, parallel = FALSE, type.fitted = c("mean", "percentiles", "Qlink"), percentiles = 50) cens.rayleigh(lscale = "loglink", oim = TRUE)
lscale |
Parameter link function applied to the scale parameter |
nrfs |
Numeric, of length one, with value in |
oim.mean |
Logical, used only for intercept-only models.
|
oim |
Logical.
For censored data only,
|
zero , parallel
|
Details at |
type.fitted , percentiles
|
See |
The Rayleigh distribution, which is used in physics, has a probability density function that can be written
f(y; b) = (y / b^2) exp(-0.5 * (y / b)^2)
for y > 0 and b > 0. The mean of Y is b * sqrt(pi / 2) (returned as the fitted values) and its variance is (4 - pi) * b^2 / 2.
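A quick simulation check of the mean formula, using rrayleigh() (documented later in this manual):

set.seed(1)
b <- 2
mean(rrayleigh(100000, scale = b))  # Approximately the next value
b * sqrt(pi / 2)                    # Theoretical mean, about 2.5066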
The VGAM family function cens.rayleigh
handles
right-censored data (the true value is greater than the observed
value). To indicate which type of censoring, input extra =
list(rightcensored = vec2)
where vec2
is a logical vector the
same length as the response. If the component of this list is missing
then the logical values are taken to be FALSE
. The fitted
object has this component stored in the extra
slot.
The VGAM family function rayleigh
handles multiple
responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
rrvglm
and vgam
.
The theory behind the argument oim
is not fully complete.
The poisson.points
family function is
more general so that if ostatistic = 1
and dimension = 2
then it coincides with rayleigh
.
Other related distributions are the Maxwell
and Weibull distributions.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
Rayleigh, genrayleigh, riceff, maxwell, weibullR, poisson.points, simulate.vlm.
nn <- 1000; Scale <- exp(2)
rdata <- data.frame(ystar = rrayleigh(nn, scale = Scale))
fit <- vglm(ystar ~ 1, rayleigh, data = rdata, trace = TRUE)
head(fitted(fit))
with(rdata, mean(ystar))
coef(fit, matrix = TRUE)
Coef(fit)

# Censored data
rdata <- transform(rdata, U = runif(nn, 5, 15))
rdata <- transform(rdata, y = pmin(U, ystar))
## Not run: 
par(mfrow = c(1, 2))
hist(with(rdata, ystar)); hist(with(rdata, y))
## End(Not run)
extra <- with(rdata, list(rightcensored = ystar > U))
fit <- vglm(y ~ 1, cens.rayleigh, data = rdata, trace = TRUE,
            extra = extra, crit = "coef")
table(fit@extra$rightcen)
coef(fit, matrix = TRUE)
head(fitted(fit))
Density, distribution function, quantile function and random generation for the Rayleigh distribution with scale parameter b.
drayleigh(x, scale = 1, log = FALSE)
prayleigh(q, scale = 1, lower.tail = TRUE, log.p = FALSE)
qrayleigh(p, scale = 1, lower.tail = TRUE, log.p = FALSE)
rrayleigh(n, scale = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Fed into |
scale |
the scale parameter |
log |
Logical.
If |
lower.tail , log.p
|
See rayleigh, the VGAM family function for estimating the scale parameter b by maximum likelihood estimation, for the formula of the probability density function and range restrictions on the parameter.
drayleigh gives the density, prayleigh gives the distribution function, qrayleigh gives the quantile function, and rrayleigh generates random deviates.
The Rayleigh distribution is related to the Maxwell distribution.
T. W. Yee and Kai Huang
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
## Not run: 
Scale <- 2; x <- seq(-1, 8, by = 0.1)
plot(x, drayleigh(x, scale = Scale), type = "l", ylim = c(0, 1),
     las = 1, ylab = "",
     main = "Rayleigh density divided into 10 equal areas; red = CDF")
abline(h = 0, col = "blue", lty = 2)
qq <- qrayleigh(seq(0.1, 0.9, by = 0.1), scale = Scale)
lines(qq, drayleigh(qq, scale = Scale), col = 2, lty = 3, type = "h")
lines(x, prayleigh(x, scale = Scale), col = "red")
## End(Not run)
Rearrange the rows and columns of the input so that the first row and first column are baseline. This function is for rank-zero row-column interaction models (RCIMs; i.e., general main effects models).
Rcim(mat, rbaseline = 1, cbaseline = 1)
mat |
Matrix, of dimension |
rbaseline , cbaseline
|
Numeric (row number of the matrix |
This is a data preprocessing function for rcim
.
For rank-zero row-column interaction models this function
establishes the baseline (or reference) levels of the matrix
response with respect to the row and columns—these become
the new first row and column.
Matrix of the same dimension as the input,
with rbaseline
and cbaseline
specifying the
first rows and columns.
The default is no change in mat
.
This function is similar to moffset; see moffset for information about the differences. If numeric, the arguments rbaseline and cbaseline differ from the arguments roffset and coffset in moffset by 1 (when elements of the matrix agree).
Alfian F. Hadi and T. W. Yee.
(alcoff.e <- moffset(alcoff, roffset = "6", postfix = "*"))
(aa <- Rcim(alcoff, rbaseline = "11", cbaseline = "Sun"))
(bb <- moffset(alcoff, "11", "Sun", postfix = "*"))
aa - bb  # Note the difference!
Random generation for constrained quadratic ordination (CQO).
rcqo(n, p, S, Rank = 1,
     family = c("poisson", "negbinomial", "binomial-poisson",
                "Binomial-negbinomial", "ordinal-poisson",
                "Ordinal-negbinomial", "gamma2"),
     eq.maximums = FALSE, eq.tolerances = TRUE, es.optimums = FALSE,
     lo.abundance = if (eq.maximums) hi.abundance else 10,
     hi.abundance = 100, sd.latvar = head(1.5/2^(0:3), Rank),
     sd.optimums = ifelse(es.optimums, 1.5/Rank, 1) *
                   ifelse(scale.latvar, sd.latvar, 1),
     sd.tolerances = 0.25, Kvector = 1, Shape = 1,
     sqrt.arg = FALSE, log.arg = FALSE, rhox = 0.5, breaks = 4,
     seed = NULL, optimums1.arg = NULL, Crow1positive = TRUE,
     xmat = NULL, scale.latvar = TRUE)
n |
Number of sites. It is denoted by |
p |
Number of environmental variables, including an intercept term.
It is denoted by |
S |
Number of species.
It is denoted by |
Rank |
The rank or the number of latent variables or true dimension
of the data on the reduced space.
This must be either 1, 2, 3 or 4.
It is denoted by |
family |
What type of species data is to be returned.
The first choice is the default.
If binomial then a 0 means absence and 1 means presence.
If ordinal then the |
eq.maximums |
Logical. Does each species have the same maximum?
See arguments |
eq.tolerances |
Logical. Does each species have the
same tolerance? If |
es.optimums |
Logical. Do the species have equally spaced optimums?
If |
lo.abundance , hi.abundance
|
Numeric. These are recycled to a vector of length |
sd.latvar |
Numeric, of length |
sd.optimums |
Numeric, of length |
sd.tolerances |
Logical. If |
Kvector |
A vector of positive |
Shape |
A vector of positive |
sqrt.arg |
Logical. Take the square-root of the
negative binomial counts?
Assigning |
log.arg |
Logical. Take the logarithm of the gamma random variates?
Assigning |
rhox |
Numeric, less than 1 in absolute value.
The correlation between the environmental variables.
The correlation matrix is a matrix of 1's along the diagonal
and |
breaks |
If |
seed |
If given, it is passed into |
optimums1.arg |
If assigned and |
Crow1positive |
See |
xmat |
The
|
scale.latvar |
Logical. If |
This function generates data coming from a
constrained quadratic
ordination (CQO) model. In particular,
data coming from a species packing model
can be generated
with this function.
The species packing model states that species have equal
tolerances, equal maximums, and optimums which are uniformly
distributed over the latent variable space. This can be
achieved by assigning the arguments es.optimums = TRUE
,
eq.maximums = TRUE
, eq.tolerances = TRUE
.
At present, the Poisson and negative binomial abundances
are generated first using lo.abundance
and
hi.abundance
, and if family
is binomial or ordinal
then it is converted into these forms.
In CQO theory the n by p matrix X is partitioned into two parts X_1 and X_2. The matrix X_2 contains the 'real' environmental variables whereas the variables in X_1 are just for adjustment purposes; they contain the intercept terms and other variables that one wants to adjust for when (primarily) looking at the variables in X_2. This function has X_1 only being a matrix of ones, i.e., containing an intercept only.
An n by (p - 1 + S) data frame with components and attributes. In the following the attributes are labelled with double quotes.
x2 , x3 , x4 , ... , xp
|
The environmental variables. This makes up the
|
y1 , y2 , y3 , ... , yS
|
The species data. This makes up the
|
"concoefficients" |
The |
"formula" |
The formula involving the species and environmental
variable names.
This can be used directly in the |
"log.maximums" |
The |
"latvar" |
The |
"eta" |
The linear/additive predictor value. |
"optimums" |
The |
"tolerances" |
The |
Other attributes are "break"
,
"family"
, "Rank"
,
"lo.abundance"
, "hi.abundance"
,
"eq.tolerances"
, "eq.maximums"
,
"seed"
as used.
This function is under development and is not finished yet. There may be a few bugs.
Yet to do: add an argument that allows absences to be equal to the first level if ordinal data is requested.
T. W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
ter Braak, C. J. F. and Prentice, I. C. (1988). A theory of gradient analysis. Advances in Ecological Research, 18, 271–317.
cqo
,
qrrvglm.control
,
cut
,
binomialff
,
poissonff
,
negbinomial
,
gamma2
.
## Not run: 
# Example 1: Species packing model:
n <- 100; p <- 5; S <- 5
mydata <- rcqo(n, p, S, es.opt = TRUE, eq.max = TRUE)
names(mydata)
(myform <- attr(mydata, "formula"))
fit <- cqo(myform, poissonff, mydata, Bestof = 3)  # eq.tol = TRUE
matplot(attr(mydata, "latvar"), mydata[, -(1:(p-1))], col = 1:S)
persp(fit, col = 1:S, add = TRUE)
lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S)  # Same plot as above

# Compare the fitted model with the 'truth'
concoef(fit)  # The fitted model
attr(mydata, "concoefficients")  # The 'truth'

c(apply(attr(mydata, "latvar"), 2, sd),
  apply(latvar(fit), 2, sd))  # Both values should be approx equal

# Example 2: negative binomial data fitted using a Poisson model:
n <- 200; p <- 5; S <- 5
mydata <- rcqo(n, p, S, fam = "negbin", sqrt = TRUE)
myform <- attr(mydata, "formula")
fit <- cqo(myform, fam = poissonff, dat = mydata)  # I.tol = TRUE,
lvplot(fit, lcol = 1:S, y = TRUE, pcol = 1:S)

# Compare the fitted model with the 'truth'
concoef(fit)  # The fitted model
attr(mydata, "concoefficients")  # The 'truth'
## End(Not run)
Generates Dirichlet random variates.
rdiric(n, shape, dimension = NULL, is.matrix.shape = FALSE)
n |
number of observations.
Note it has two meanings, see |
shape |
the shape parameters. These must be positive.
If |
dimension |
the dimension of the distribution.
If |
is.matrix.shape |
Logical.
If |
This function is based on a relationship between the gamma and Dirichlet distribution. Random gamma variates are generated, and then Dirichlet random variates are formed from these.
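A minimal sketch of that construction (the helper name rdiric.sketch is hypothetical): generate independent gamma variates with the given shape parameters and normalize each row so it sums to unity.

rdiric.sketch <- function(n, shape) {
  g <- matrix(rgamma(n * length(shape), shape = rep(shape, each = n)),
              nrow = n)  # Column j uses shape[j]
  g / rowSums(g)         # Normalize; each row sums to unity
}
set.seed(1)
colMeans(rdiric.sketch(1000, c(3, 1, 4)))  # Approx shape / sum(shape)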
An n by dimension matrix of Dirichlet random variates. Each element is positive, and each row will sum to unity. If shape has names then these will become the column names of the answer.
Thomas W. Yee
Lange, K. (2002). Mathematical and Statistical Methods for Genetic Analysis, 2nd ed. New York: Springer-Verlag.
dirichlet is a VGAM family function for fitting a Dirichlet distribution to data.
ddata <- data.frame(rdiric(n = 1000, shape = c(y1 = 3, y2 = 1, y3 = 4))) fit <- vglm(cbind(y1, y2, y3) ~ 1, dirichlet, data = ddata, trace = TRUE) Coef(fit) coef(fit, matrix = TRUE)
Maximum likelihood estimation of the rate parameter of a 1-parameter exponential distribution when the observations are upper record values.
rec.exp1(lrate = "loglink", irate = NULL, imethod = 1)
rec.exp1(lrate = "loglink", irate = NULL, imethod = 1)
lrate |
Link function applied to the rate parameter.
See |
irate |
Numeric. Optional initial values for the rate.
The default value |
imethod |
Integer, either 1 or 2 or 3. Initial method; three algorithms are implemented. Choose another value if convergence fails, or use |
The response must be a vector or one-column matrix with strictly increasing values.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
By default, this family function has the intercept-only MLE as the initial value, therefore convergence may only take one iteration. Fisher scoring is used.
T. W. Yee
Arnold, B. C. and Balakrishnan, N. and Nagaraja, H. N. (1998). Records, New York: John Wiley & Sons.
rawy <- rexp(n <- 10000, rate = exp(1))
y <- unique(cummax(rawy))  # Keep only the records
length(y) / y[length(y)]   # MLE of rate
fit <- vglm(y ~ 1, rec.exp1, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
Maximum likelihood estimation of the two parameters of a univariate normal distribution when the observations are upper record values.
rec.normal(lmean = "identitylink", lsd = "loglink", imean = NULL, isd = NULL, imethod = 1, zero = NULL)
rec.normal(lmean = "identitylink", lsd = "loglink", imean = NULL, isd = NULL, imethod = 1, zero = NULL)
lmean , lsd
|
Link functions applied to the mean and sd parameters.
See |
imean , isd
|
Numeric. Optional initial values for the mean and sd.
The default value |
imethod |
Integer, either 1 or 2 or 3. Initial method; three algorithms are implemented. Choose another value if convergence fails, or use |
zero |
Can be an integer vector, containing the value 1 or 2.
If so, the mean or
standard deviation respectively are modelled as an
intercept only.
Usually, setting |
The response must be a vector or one-column matrix with strictly increasing values.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This family function tries to solve a difficult problem, and the larger the data set the better. Convergence failure can commonly occur, and convergence may be very slow, so set maxit = 200, trace = TRUE, say. Inputting good initial values is advised.
This family function uses the BFGS quasi-Newton update formula for the working weight matrices. Consequently the estimated variance-covariance matrix may be inaccurate or simply wrong! The standard errors must therefore be treated with caution; these are computed in functions such as vcov() and summary().
T. W. Yee
Arnold, B. C. and Balakrishnan, N. and Nagaraja, H. N. (1998). Records, New York: John Wiley & Sons.
uninormal, double.cens.normal.
nn <- 10000; mymean <- 100
# First value is reference value or trivial record
Rdata <- data.frame(rawy = c(mymean, rnorm(nn, mymean, exp(3))))
# Keep only observations that are records:
rdata <- data.frame(y = unique(cummax(with(Rdata, rawy))))
fit <- vglm(y ~ 1, rec.normal, rdata, trace = TRUE, maxit = 200)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
Computes the reciprocal transformation, including its inverse and the first two derivatives.
reciprocallink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
               short = TRUE, tag = FALSE)
negreciprocallink(theta, bvalue = NULL, inverse = FALSE, deriv = 0,
                  short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bvalue |
See |
inverse , deriv , short , tag
|
Details at |
The reciprocallink link function is a special case of the power link function. Numerical values of theta close to 0 result in Inf, -Inf, NA or NaN. The negreciprocallink link function computes the negative reciprocal, i.e., -1/theta.
For reciprocallink: for deriv = 0, the reciprocal of theta, i.e., 1/theta when inverse = FALSE, and if inverse = TRUE then also 1/theta (the reciprocal link is its own inverse). For deriv = 1, then the function returns d theta / d eta as a function of theta if inverse = FALSE, else if inverse = TRUE then it returns the reciprocal.
Numerical instability may occur when theta is close to 0.
Thomas W. Yee
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
reciprocallink(1:5)
reciprocallink(1:5, inverse = TRUE, deriv = 2)
negreciprocallink(1:5)
negreciprocallink(1:5, inverse = TRUE, deriv = 2)

x <- (-3):3
reciprocallink(x)  # Has Inf
reciprocallink(x, bvalue = .Machine$double.eps)  # Has no Inf
Residuals for a vector generalized linear model (VGLM) object.
residualsvglm(object,
              type = c("working", "pearson", "response", "deviance",
                       "ldot", "stdres", "rquantile"),
              matrix.arg = TRUE)
object |
Object of class |
type |
The value of this argument can be abbreviated. The type of residuals to be returned. The default is the first one: working residuals corresponding to the IRLS algorithm. These are defined for all models. They are sometimes added to VGAM plots of estimated component functions (see plotvgam).

Pearson residuals for GLMs, when squared and summed over the data set, total to the Pearson chi-squared statistic. For VGLMs, Pearson residuals involve the working weight matrices and the score vectors. Under certain limiting conditions, Pearson residuals have 0 means and identity matrix as the variance-covariance matrix.

Response residuals are simply the difference between the observed values and the fitted values. Both have to be of the same dimension, hence not all families have response residuals defined.

Deviance residuals are only defined for models with a deviance function. They tend to GLMs mainly. This function returns NULL for models whose deviance is undefined.

Randomized quantile residuals (RQRs) (Dunn and Smyth, 1996) are based on the fitted distribution function being fed into qnorm.

The choice type = "stdres" is for standardized residuals, which are currently only defined for 2 types of models: (i) GLMs ( |
matrix.arg |
Logical, which applies when the pre-processed answer is a vector or a 1-column matrix.
If |
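As background to the "rquantile" choice above, here is a hand-rolled sketch of the RQR construction of Dunn and Smyth (1996) for Poisson counts with a known mean: randomize on the probability scale between the left and right limits of the distribution function, then transform with qnorm.

set.seed(1)
y <- rpois(100, lambda = 4)
u <- runif(100, min = ppois(y - 1, 4), max = ppois(y, 4))
rqr <- qnorm(u)  # Approx standard normal if the model is correct
summary(rqr)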
This function returns various kinds of residuals, sometimes depending on the specific type of model having been fitted. Section 3.7 of Yee (2015) gives some details on several types of residuals defined for the VGLM class.
Standardized residuals for GLMs are described in
Section 4.5.6 of Agresti (2013) as the ratio of
the raw (response) residuals divided by their
standard error.
They involve the generalized hat matrix evaluated
at the final IRLS iteration.
When applied to the LM,
standardized residuals for GLMs simplify to
rstandard
.
For GLMs they are basically
the Pearson residual divided by the square root of 1 minus the
leverage.
If that residual type is undefined or inappropriate
or not yet implemented,
then NULL
is returned,
otherwise a matrix or vector of residuals is returned.
This function may change in the future, especially the residual types whose definitions may change.
Agresti, A. (2007). An Introduction to Categorical Data Analysis, 2nd ed., New York: John Wiley & Sons. Page 38.
Agresti, A. (2013). Categorical Data Analysis, 3rd ed., New York: John Wiley & Sons.
Agresti, A. (2018). An Introduction to Categorical Data Analysis, 3rd ed., New York: John Wiley & Sons.
Dunn, P. K. and Smyth, G. K. (1996). Randomized quantile residuals. Journal of Computational and Graphical Statistics, 5, 236–244.
resid, vglm, chisq.test, hatvalues.
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)
resid(fit)  # Same as having type = "working" (the default)
resid(fit, type = "response")
resid(fit, type = "pearson")
resid(fit, type = "stdres")  # Test for independence
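As a quick numerical check of the claim that squared Pearson residuals for a GLM sum to the Pearson chi-squared statistic, here is a sketch using an ordinary glm() fit (the Dobson counts from the glm() help page):

counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome <- gl(3, 1, 9); treatment <- gl(3, 3)
gfit <- glm(counts ~ outcome + treatment, family = poisson())
sum(residuals(gfit, type = "pearson")^2)  # The Pearson X^2 statistic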
Computes the rhobit link transformation, including its inverse and the first two derivatives.
rhobitlink(theta, bminvalue = NULL, bmaxvalue = NULL,
           inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
theta |
Numeric or character. See below for further details. |
bminvalue , bmaxvalue
|
Optional boundary values, e.g.,
values of |
inverse , deriv , short , tag
|
Details at |
The rhobitlink link function is commonly used for parameters that lie between -1 and 1. Numerical values of theta close to -1 or 1, or out of range, result in Inf, -Inf, NA or NaN.
For deriv = 0, the rhobit of theta, i.e., log((1 + theta)/(1 - theta)) when inverse = FALSE, and if inverse = TRUE then (exp(theta) - 1)/(exp(theta) + 1). For deriv = 1, then the function returns d eta / d theta as a function of theta if inverse = FALSE, else if inverse = TRUE then it returns the reciprocal.
Numerical instability may occur when theta is close to -1 or 1. One way of overcoming this is to use bminvalue, etc.
The correlation parameter of a standard bivariate normal distribution lies between -1 and 1, therefore this function can be used for modelling this parameter as a function of explanatory variables. The link function rhobitlink is very similar to fisherzlink, being just twice the value of fisherzlink.
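A quick numerical check of that factor-of-two relationship:

theta <- 0.5
rhobitlink(theta) / fisherzlink(theta)  # Exactly 2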
Thomas W. Yee
theta <- seq(-0.99, 0.99, by = 0.01)
y <- rhobitlink(theta)
## Not run: 
plot(theta, y, type = "l", ylab = "", main = "rhobitlink(theta)")
abline(v = 0, h = 0, lty = 2)
## End(Not run)

x <- c(seq(-1.02, -0.98, by = 0.01), seq(0.97, 1.02, by = 0.01))
rhobitlink(x)  # Has NAs
rhobitlink(x, bminvalue = -1 + .Machine$double.eps,
              bmaxvalue =  1 - .Machine$double.eps)  # Has no NAs
Density, distribution function, quantile function and random generation for the Rician distribution.
drice(x, sigma, vee, log = FALSE)
price(q, sigma, vee, lower.tail = TRUE, log.p = FALSE, ...)
qrice(p, sigma, vee, lower.tail = TRUE, log.p = FALSE, ...)
rrice(n, sigma, vee)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as in |
vee , sigma
|
See |
... |
Other arguments such as
|
lower.tail , log.p
|
|
log |
Logical.
If |
See riceff
, the VGAM family function
for estimating the two parameters,
for the formula of the probability density function
and other details.
Formulas for price() and qrice() are based on the Marcum-Q function.
drice gives the density, price gives the distribution function, qrice gives the quantile function, and rrice generates random deviates.
T. W. Yee and Kai Huang
## Not run: 
x <- seq(0.01, 7, len = 201)
plot(x, drice(x, vee = 0, sigma = 1), type = "n", las = 1, ylab = "",
     main = "Density of Rice distribution for various v values")
sigma <- 1; vee <- c(0, 0.5, 1, 2, 4)
for (ii in 1:length(vee))
  lines(x, drice(x, vee = vee[ii], sigma), col = ii)
legend(x = 5, y = 0.6, legend = as.character(vee),
       col = 1:length(vee), lty = 1)

x <- seq(0, 4, by = 0.01); vee <- 1; sigma <- 1
probs <- seq(0.05, 0.95, by = 0.05)
plot(x, drice(x, vee = vee, sigma = sigma), type = "l",
     main = "Blue is density, orange is CDF", col = "blue",
     ylim = c(0, 1), sub = "Red are 5, 10, ..., 95 percentiles",
     las = 1, ylab = "", cex.main = 0.9)
abline(h = 0:1, col = "black", lty = 2)
Q <- qrice(probs, sigma, vee = vee)
lines(Q, drice(qrice(probs, sigma, vee = vee), sigma, vee = vee),
      col = "red", lty = 3, type = "h")
lines(x, price(x, sigma, vee = vee), type = "l", col = "orange")
lines(Q, drice(Q, sigma, vee = vee), col = "red", lty = 3, type = "h")
lines(Q, price(Q, sigma, vee = vee), col = "red", lty = 3, type = "h")
abline(h = probs, col = "red", lty = 3)
max(abs(price(Q, sigma, vee = vee) - probs))  # Should be 0
## End(Not run)
Estimates the two parameters of a Rice distribution by maximum likelihood estimation.
riceff(lsigma = "loglink", lvee = "loglink", isigma = NULL,
       ivee = NULL, nsimEIM = 100, zero = NULL, nowarning = FALSE)
nowarning: logical. Suppress a warning? Ignored for VGAM 0.9-7 and higher.
lvee, lsigma: link functions for the vee and sigma parameters. See Links for more choices.
ivee, isigma: optional initial values for the parameters. If convergence failure occurs (this VGAM family function seems to require good initial values) try using these arguments. See CommonVGAMffArguments for more information.
nsimEIM, zero: see CommonVGAMffArguments for information.
The Rician distribution has density function
f(y; v, \sigma) = \frac{y}{\sigma^2} \exp\left( -\frac{y^2 + v^2}{2\sigma^2} \right) I_0\!\left( \frac{y v}{\sigma^2} \right)
where y > 0, v > 0, \sigma > 0, and I_0(\cdot) is the modified Bessel function of the first kind with order zero. When v = 0 the Rice distribution reduces to a Rayleigh distribution. The mean is \sigma \sqrt{\pi/2}\, L_{1/2}(-v^2/(2\sigma^2)) (returned as the fitted values), where L_{1/2}(\cdot) is a Laguerre polynomial of degree 1/2.
Simulated Fisher scoring is implemented.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
and vgam
.
Convergence problems may occur for data where v = 0; if so, use rayleigh or possibly use an identity link.
When v/\sigma is large (greater than 3, say) then the mean is approximately v and the standard deviation is approximately \sigma.
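As an illustrative check of this approximation (a sketch, not from the original documentation):
set.seed(1)
sigma <- 1; vee <- 10  # Here vee/sigma is large
y <- rrice(10000, sigma, vee = vee)
c(mean(y), vee)   # The mean is approximately vee
c(sd(y), sigma)   # The SD is approximately sigma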
T. W. Yee
Rice, S. O. (1945). Mathematical Analysis of Random Noise. Bell System Technical Journal, 24, 46–156.
drice
,
rayleigh
,
besselI
,
simulate.vlm
.
## Not run:
sigma <- exp(1); vee <- exp(2)
rdata <- data.frame(y = rrice(n <- 1000, sigma, vee = vee))
fit <- vglm(y ~ 1, riceff, data = rdata, trace = TRUE, crit = "c")
c(with(rdata, mean(y)), fitted(fit)[1])
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Estimation of the parameters of a reciprocal inverse Gaussian distribution.
rigff(lmu = "identitylink", llambda = "loglink", imu = NULL, ilambda = 1)
lmu, llambda: link functions for mu and lambda. See Links for more choices.
imu, ilambda: initial values for mu and lambda. A NULL means a value is computed internally.
See Jorgensen (1997) for details.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This distribution is potentially useful for dispersion modelling.
T. W. Yee
Jorgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall
rdata <- data.frame(y = rchisq(100, df = 14))  # Not 'proper' data!!
fit <- vglm(y ~ 1, rigff, rdata, trace = TRUE)
fit <- vglm(y ~ 1, rigff, rdata, trace = TRUE, crit = "c")
summary(fit)
A return level plot is constructed for a GEV-type model.
rlplot.gevff(object, show.plot = TRUE,
    probability = c((1:9)/100, (1:9)/10, 0.95, 0.99, 0.995, 0.999),
    add.arg = FALSE,
    xlab = if (log.arg) "Return Period (log-scale)" else "Return Period",
    ylab = "Return Level", main = "Return Level Plot",
    pch = par()$pch, pcol.arg = par()$col, pcex = par()$cex,
    llty.arg = par()$lty, lcol.arg = par()$col, llwd.arg = par()$lwd,
    slty.arg = par()$lty, scol.arg = par()$col, slwd.arg = par()$lwd,
    ylim = NULL, log.arg = TRUE, CI = TRUE, epsilon = 1e-05, ...)
object: a VGAM extremes model of the GEV-type, produced by modelling functions such as vglm and vgam with the family function gevff.
show.plot: logical. Plot it?
probability: numeric vector of probabilities used.
add.arg: logical. Add the plot to an existing plot?
xlab: caption for the x-axis. See par.
ylab: caption for the y-axis. See par.
main: title of the plot. See title.
pch: plotting character. See par.
pcol.arg: color of the points. See the col argument of par.
pcex: character expansion of the points. See the cex argument of par.
llty.arg: line type. See the lty argument of par.
lcol.arg: color of the lines. See the col argument of par.
llwd.arg: line width. See the lwd argument of par.
slty.arg, scol.arg, slwd.arg: corresponding arguments for the lines used for the confidence intervals. Used only if CI = TRUE.
ylim: limits for the y-axis. Numeric of length 2.
log.arg: logical. If TRUE then the return period (x-axis) is on a logarithmic scale.
CI: logical. Add in a 95 percent confidence interval?
epsilon: numeric, close to zero. Used for the finite-difference approximation to the first derivatives with respect to each parameter. If too small, numerical problems will occur.
...: arguments passed into the plot function when setting up the entire plot.
A return level plot plots z_p versus \log(y_p), where y_p = -\log(1 - p). It is linear if the shape parameter \xi = 0. If \xi < 0 then the plot is convex with asymptotic limit, as p approaches zero, at \mu - \sigma/\xi. And if \xi > 0 then the plot is concave and has no finite bound. Here, G(z_p) = 1 - p where 0 < p < 1 (p corresponds to the argument probability) and G is the cumulative distribution function of the GEV distribution. The quantity z_p is known as the return level associated with the return period 1/p. For many applications, this means z_p is exceeded by the annual maximum in any particular year with probability p.
The points in the plot are the actual data.
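An illustrative sketch (assuming the [dpqr]gev functions that accompany the GEV family functions in VGAM): since G(z_p) = 1 - p, the return level can be computed directly from the GEV quantile function:
p <- 0.01  # Return period 1/p = 100, e.g., the "100-year" level
qgev(1 - p, location = 0, scale = 2, shape = -0.1)  # Return level z_p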
In the post
slot of the object is a list called
rlplot
with list components
yp: values which are used for the x-axis.
zp: values which are used for the y-axis.
lower, upper: lower and upper confidence limits for the 95 percent confidence intervals evaluated at the values of probability.
The confidence intervals are approximate, being based on finite-difference approximations to derivatives.
T. W. Yee
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values. London: Springer-Verlag.
gdata <- data.frame(y = rgev(n <- 100, scale = 2, shape = -0.1))
fit <- vglm(y ~ 1, gevff, data = gdata, trace = TRUE)

# Identity link for all parameters:
fit2 <- vglm(y ~ 1, gevff(lshape = identitylink, lscale = identitylink,
                          iscale = 10), data = gdata, trace = TRUE)
coef(fit2, matrix = TRUE)
## Not run:
par(mfrow = c(1, 2))
rlplot(fit) -> i1
rlplot(fit2, pcol = "darkorange", lcol = "blue", log.arg = FALSE,
       scol = "darkgreen", slty = "dashed", las = 1) -> i2
range(i2@post$rlplot$upper - i1@post$rlplot$upper)  # Should be near 0
range(i2@post$rlplot$lower - i1@post$rlplot$lower)  # Should be near 0
## End(Not run)
A graphical technique for comparing the observed and fitted counts from a probability model, on a square root scale.
rootogram4(object, ...)
rootogram4vglm(object, newdata = NULL, breaks = NULL, max = NULL,
               xlab = NULL, main = NULL, width = NULL, ...)
object: an object of class "vglm".
newdata: data upon which to base the calculations. The default is the data used to fit the model.
breaks: numeric. Breaks for the histogram intervals.
max: maximum count displayed. If an error message occurs regarding running out of memory then use this argument; it might occur with a very long tailed distribution.
xlab, main: graphical parameters.
width: numeric. Widths of the histogram bars.
...: any additional arguments passed on.
Rootograms are a useful graphical technique for comparing the observed counts with the expected counts given a probability model.
This S4 implementation is based very heavily
on rootogram
coming from
countreg. This package is primarily written by
A. Zeileis and
C. Kleiber.
That package is currently on R-Forge but not CRAN, and
it is based on S3.
Since VGAM is written using S4, it was necessary
to define an S4 generic function called
rootogram4()
which dispatches appropriately for
S4 objects.
Currently, only a selected number of VGAM family functions are implemented. Over time, hopefully more and more will be completed.
See
rootogram
in countreg;
an object of class "rootogram0"
inheriting from "data.frame"
with
about 8 variables.
This function is rudimentary and based totally on the implementation in countreg.
The function names used coming from countreg have been renamed slightly to avoid conflict.
Package countreg is primarily written by
A. Zeileis and
C. Kleiber.
Function rootogram4()
is based very heavily
on countreg.
T. W. Yee wrote code to unpack variables from
many different models
and feed them into the appropriate d
-type function.
Friendly, M. and Meyer, D. (2016). Discrete Data Analysis with R: Visualization and Modeling Techniques for Categorical and Count Data, Boca Raton, FL, USA: Chapman & Hall/CRC Press.
Kleiber, C. and Zeileis, A. (2016) “Visualizing Count Data Regressions Using Rootograms.” The American Statistician, 70(3), 296–303. doi:10.1080/00031305.2016.1173590.
Tukey, J. W. (1977) Exploratory Data Analysis, Reading, MA, USA: Addison-Wesley.
vglm
,
vgam
,
glm
,
zipoisson
,
zapoisson
,
rootogram
in countreg.
## Not run: data("hspider", package = "VGAM") # Count responses hs.p <- vglm(Pardlugu ~ CoveHerb, poissonff, data = hspider) hs.nb <- vglm(Pardlugu ~ CoveHerb, negbinomial, data = hspider) hs.zip <- vglm(Pardlugu ~ CoveHerb, zipoisson, data = hspider) hs.zap <- vglm(Pardlugu ~ CoveHerb, zapoisson, data = hspider) opar <- par(mfrow = c(2, 2)) # Plot the rootograms rootogram4(hs.p, max = 15, main = "poissonff") rootogram4(hs.nb, max = 15, main = "negbinomial") rootogram4(hs.zip, max = 15, main = "zipoisson") rootogram4(hs.zap, max = 15, main = "zapoisson") par(opar) ## End(Not run)
## Not run: data("hspider", package = "VGAM") # Count responses hs.p <- vglm(Pardlugu ~ CoveHerb, poissonff, data = hspider) hs.nb <- vglm(Pardlugu ~ CoveHerb, negbinomial, data = hspider) hs.zip <- vglm(Pardlugu ~ CoveHerb, zipoisson, data = hspider) hs.zap <- vglm(Pardlugu ~ CoveHerb, zapoisson, data = hspider) opar <- par(mfrow = c(2, 2)) # Plot the rootograms rootogram4(hs.p, max = 15, main = "poissonff") rootogram4(hs.nb, max = 15, main = "negbinomial") rootogram4(hs.zip, max = 15, main = "zipoisson") rootogram4(hs.zap, max = 15, main = "zapoisson") par(opar) ## End(Not run)
'round2' works like 'round' but the rounding takes base 2 into consideration, so that bits (binary digits) beyond a certain threshold are zeroed.
round2(x, digits10 = 0)
x: same as round.
digits10: same as the digits argument of round.
round2() is intended to allow reliable and safe == comparisons, provided both sides have the function applied to the same value of digits10. Internally a numeric has its binary representation (bits) past a certain point set to all 0s, while retaining a certain degree of accuracy.
Algorithmically, x
is multiplied by 2^exponent
and then rounded, and then divided by 2^exponent
.
The value of exponent
is approximately 3 *
digits10
when digits10
is positive. If digits10
is negative then what is returned is round(x, digits10)
.
The value of exponent
guarantees that x
has been
rounded to at least digits10
decimal places (often around
digits10 + 1
for safety).
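A rough sketch of the multiply-round-divide idea just described (illustrative only; the exact exponent used internally may differ):
x <- pi
exponent <- 3 * 3  # Approximately 3 * digits10, with digits10 = 3
round(x * 2^exponent) / 2^exponent  # Comparable in spirit to round2(x, 3)
round2(x, 3)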
Something similar to round
.
T. W. Yee.
set.seed(1); x <- sort(rcauchy(10))
x3 <- round2(x, 3)
x3 == round2(x, 3)  # Supposed to be reliable (all TRUE)
rbind(x, x3)  # Comparison
(x3[1] * 2^(0:9)) / 2^(0:9)
print((x3[1] * 2^(0:11)), digits = 14)

# Round to approx 1 d.p.
x1 <- round2(x, 1)
x1 == round2(x, 1)  # Supposed to be reliable (all TRUE)
rbind(x, x1)
x1[8] == 0.75  # 3/4
print((x1[1] * 2^(0:11)), digits = 9)
seq(31) / 32
Estimates the parameters of a nested reduced-rank autoregressive model for multiple time series.
rrar(Ranks = 1, coefstart = NULL)
Ranks: vector of integers: the ranks of the model. Each value must be at least one and no more than M, the number of response time series.
coefstart: optional numerical vector of initial values for the coefficients. By default, the family function chooses these automatically.
Full details are given in Ahn and Reinsel (1988).
Convergence may be very slow, so setting maxit = 50,
say, may help. If convergence is not obtained, you might like
to try inputting different initial values.
Setting trace = TRUE
in vglm
is useful
for monitoring the progress at each iteration.
An object of class "vglmff"
(see
vglmff-class
). The object is used by modelling
functions such as vglm
and vgam
.
This family function should be used within vglm
and not with rrvglm
because it does not fit into
the RR-VGLM framework exactly. Instead, the reduced-rank model
is formulated as a VGLM!
A methods function Coef.rrar
, say, has yet to be written.
It would return the quantities
Ak1
,
C
,
D
,
omegahat
,
Phi
,
etc. as slots, and then show.Coef.rrar
would also need
to be written.
T. W. Yee
Ahn, S. and Reinsel, G. C. (1988). Nested reduced-rank autoregressive models for multiple time series. Journal of the American Statistical Association, 83, 849–856.
## Not run:
year <- seq(1961 + 1/12, 1972 + 10/12, by = 1/12)
par(mar = c(4, 4, 2, 2) + 0.1, mfrow = c(2, 2))
for (ii in 1:4) {
  plot(year, grain.us[, ii], main = names(grain.us)[ii], las = 1,
       type = "l", xlab = "", ylab = "", col = "blue")
  points(year, grain.us[, ii], pch = "*", col = "blue")
}
apply(grain.us, 2, mean)  # mu vector
cgrain <- scale(grain.us, scale = FALSE)  # Center the time series only
fit <- vglm(cgrain ~ 1, rrar(Ranks = c(4, 1)), trace = TRUE)
summary(fit)
print(fit@misc$Ak1, digits = 2)
print(fit@misc$Cmatrices, digits = 3)
print(fit@misc$Dmatrices, digits = 3)
print(fit@misc$omegahat, digits = 3)
print(fit@misc$Phimatrices, digits = 2)
par(mar = c(4, 4, 2, 2) + 0.1, mfrow = c(4, 1))
for (ii in 1:4) {
  plot(year, fit@misc$Z[, ii], main = paste("Z", ii, sep = ""),
       type = "l", xlab = "", ylab = "", las = 1, col = "blue")
  points(year, fit@misc$Z[, ii], pch = "*", col = "blue")
}
## End(Not run)
A reduced-rank vector generalized linear model (RR-VGLM) is fitted. RR-VGLMs are VGLMs but some of the constraint matrices are estimated. Doubly constrained RR-VGLMs (DRR-VGLMs) can also be fitted, and these provide structure for the two other outer product matrices.
rrvglm(formula, family = stop("'family' is unassigned"),
       data = list(), weights = NULL, subset = NULL,
       na.action = na.fail, etastart = NULL, mustart = NULL,
       coefstart = NULL, control = rrvglm.control(...), offset = NULL,
       method = "rrvglm.fit", model = FALSE, x.arg = TRUE, y.arg = TRUE,
       contrasts = NULL, constraints = NULL, extra = NULL,
       qr.arg = FALSE, smart = TRUE, ...)
formula, family: see vglm.
weights, data: see vglm.
subset, na.action: see vglm.
etastart, mustart, coefstart: see vglm.
control: a list of parameters for controlling the fitting process. See rrvglm.control for details.
offset, model, contrasts: see vglm.
method: the method to be used in fitting the model. The default (and presently only) method rrvglm.fit uses iteratively reweighted least squares (IRLS).
x.arg, y.arg: logical values indicating whether the model matrix and response vector/matrix used in the fitting process should be assigned in the x and y slots.
constraints: see vglm.
extra, smart, qr.arg: see vglm.
...: further arguments passed into rrvglm.control.
In this documentation, M is the number of linear predictors. For RR-VGLMs, the central formula is given by
\eta = B_1^T x_1 + A \nu
where x_1 is a vector (usually just a 1 for an intercept), x_2 is another vector of explanatory variables, and \nu = C^T x_2 is an R-vector of latent variables. Here, \eta is a vector of linear predictors, e.g., the jth element is \eta_j = \log(E[Y_j]) for the jth Poisson response. The dimension of \eta is M by definition. The matrices B_1, A and C are estimated from the data, i.e., they contain the regression coefficients. For ecologists, the central formula represents a constrained linear ordination (CLO) since it is linear in the latent variables. It means that the response is a monotonically increasing or decreasing function of the latent variables.
For identifiability it is common to enforce corner constraints on A: by default, for RR-VGLMs, the top R by R submatrix is fixed to be the order-R identity matrix and the remainder of A is estimated. And by default, for DRR-VGLMs, there is also an order-R identity matrix embedded in A because the RRR must be separable (this is so that any existing structure in A is preserved).
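An illustrative check of the corner constraints (a sketch using the pneumo data, much as in the examples of rrvglm-class):
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))
fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3,
              multinomial, data = pneumo, Rank = 1)
Coef(fit)@A  # Top R x R (here 1 x 1) submatrix is fixed at the identity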
The underlying algorithm of RR-VGLMs is iteratively reweighted least squares (IRLS) with an optimizing algorithm applied within each IRLS iteration (e.g., alternating algorithm).
In theory, any VGAM family function
that works for vglm
and vgam
should work for
rrvglm
too. The function that
actually does the work is rrvglm.fit
;
it is essentially vglm.fit
with some
extra code.
For RR-VGLMs, an object of class "rrvglm", which has the same slots as a "vglm" object. The only difference is that some of the constraint matrices are estimated rather than known. But VGAM stores the models the same internally. The slots of "vglm" objects are described in vglm-class.
For DRR-VGLMs,
an object of class "drrvglm"
.
The arguments of rrvglm
are in general the same
as those of vglm
but with some extras in
rrvglm.control
.
The smart prediction (smartpred
) library
is packaged with the VGAM library.
In an example below, a rank-1 stereotype
(reduced-rank multinomial logit)
model of Anderson (1984) is fitted to some car data.
The reduced-rank regression is performed, adjusting for
two covariates. Setting a trivial constraint matrix
(diag(M)
)
for the latent variable avoids
a warning message when it is overwritten by a (common)
estimated constraint matrix. It shows that German cars
tend to be more expensive than American cars, given a
car of fixed weight and width.
If fit <- rrvglm(..., data = mydata)
then
summary(fit)
requires corner constraints and no
missing values in mydata
.
Sometimes the estimated variance-covariance
matrix of the parameters is not
positive-definite; if this occurs, try
refitting the model with a different value
for Index.corner
.
For constrained quadratic ordination (CQO) see
cqo
for more details about QRR-VGLMs.
With multiple binary responses, one must use
binomialff(multiple.responses = TRUE)
to indicate
that the response is a matrix with one response per column.
Otherwise, it is interpreted as a single binary response
variable.
To fit DRR-VGLMs see the arguments
H.A.thy
and
H.C
in rrvglm.control
.
DRR-VGLMs provide structure to the A and
C matrices via constraint matrices.
So instead of them being general unstructured matrices, one can constrain specified elements to be identically equal to 0, for example.
This gives greater control
over what is modelled as a latent variable,
e.g., in a health study,
if one subset of the covariates are physical
variables and the remainder are psychological
variables then a rank-2 model might have each
latent variable a linear combination of each
of the types of variables separately.
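A hypothetical sketch of such a rank-2 specification (the constraint matrices below are made up purely for illustration; see rrvglm.control for the precise requirements on H.A.thy and H.C):
M <- 4  # Suppose there are M = 4 linear predictors
H.A <- list(rbind(diag(2), matrix(0, 2, 2)),  # Column 1 of A: rows 1-2
            rbind(matrix(0, 2, 2), diag(2)))  # Column 2 of A: rows 3-4
# Then something like: rrvglm(..., Rank = 2, H.A.thy = H.A, H.C = list(...))
# The columns of H.A[[1]] are orthogonal to those of H.A[[2]],
# as required for separability.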
Incidentally, since Corner = TRUE, the differences between the @H.A.thy
and
@H.A.alt
slots
are due to Index.corner
,
which specifies which rows of A
are not estimated.
However,
in the alternating algorithm,
it is more efficient to estimate the entire
A, bar (effectively) rows str0
,
and then normalize it.
In contrast, optimizing over the subset of
A to be estimated is slow.
In the @misc
slot are logical components
is.drrvglm
and
is.rrvglm
.
Only one is TRUE
.
If is.rrvglm
then (full) corner constraints
are used.
If is.drrvglm
then
restricted corner constraints (RCCs)
are used and the reduced rank regression
(RRR) must be separable.
The case is.rrvglm means that H.A.thy is a vector("list", Rank) with H.A.thy[[r]] <- diag(M) assigned to all r = 1, \ldots, R. Because DRR-VGLMs are implemented only for separable problems, this means that all columns of H.A.thy[[s]] are orthogonal to all columns of H.A.thy[[t]], for all s \neq t.
DRR-VGLMs are proposed in
Yee et al. (2024) in the context of GAITD regression
for heaped and seeped survey data.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Anderson, J. A. (1984). Regression and ordered categorical variables. Journal of the Royal Statistical Society, Series B, Methodological, 46, 1–30.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
Yee, T. W., Frigau, L. and Ma, C. (2024). Heaping and seeping, GAITD regression and doubly constrained reduced rank vector generalized linear models, in smoking studies. In preparation.
rrvglm.control
,
summary.drrvglm
,
lvplot.rrvglm
(same as biplot.rrvglm
),
rrvglm-class
,
grc
,
cqo
,
vglmff-class
,
vglm
,
vglm-class
,
smartpred
,
rrvglm.fit
.
Special family functions include negbinomial, zipoisson and zinegbinomial (see Yee (2014) and what was formerly in COZIGAM).
Methods functions include
Coef.rrvglm
,
calibrate.rrvglm
,
etc.
Data include
crashi
.
## Not run:
# Example 1: RR NB with Var(Y) = mu + delta1 * mu^delta2
nn <- 1000  # Number of observations
delta1 <- 3.0  # Specify this
delta2 <- 1.5  # Specify this; should be greater than 1
a21 <- 2 - delta2
mydata <- data.frame(x2 = runif(nn), x3 = runif(nn))
mydata <- transform(mydata, mu = exp(2 + 3 * x2 + 0 * x3))
mydata <- transform(mydata,
                    y2 = rnbinom(nn, mu = mu, size = (1/delta1)*mu^a21))
plot(y2 ~ x2, mydata, pch = "+", col = 'blue', las = 1,
     main = paste0("Var(Y) = mu + ", delta1, " * mu^", delta2))
rrnb2 <- rrvglm(y2 ~ x2 + x3, negbinomial(zero = NULL),
                data = mydata, trace = TRUE)

a21.hat <- (Coef(rrnb2)@A)["loglink(size)", 1]
beta11.hat <- Coef(rrnb2)@B1["(Intercept)", "loglink(mu)"]
beta21.hat <- Coef(rrnb2)@B1["(Intercept)", "loglink(size)"]
(delta1.hat <- exp(a21.hat * beta11.hat - beta21.hat))
(delta2.hat <- 2 - a21.hat)
# delta1.hat:
# exp(a21.hat * predict(rrnb2)[1,1] - predict(rrnb2)[1,2])
summary(rrnb2)

# Obtain a 95 percent CI for delta2:
se.a21.hat <- sqrt(vcov(rrnb2)["I(latvar.mat)", "I(latvar.mat)"])
ci.a21 <- a21.hat + c(-1, 1) * 1.96 * se.a21.hat
(ci.delta2 <- 2 - rev(ci.a21))  # The 95 percent CI
Confint.rrnb(rrnb2)  # Quick way to get it

# Plot the abundances and fitted values vs the latent variable
plot(y2 ~ latvar(rrnb2), data = mydata, col = "blue",
     xlab = "Latent variable", las = 1)
ooo <- order(latvar(rrnb2))
lines(fitted(rrnb2)[ooo] ~ latvar(rrnb2)[ooo], col = "red")

# Example 2: stereotype model (RR multinomial logit model)
data(car.all)
scar <- subset(car.all,
               is.element(Country, c("Germany", "USA", "Japan", "Korea")))
fcols <- c(13,14,18:20,22:26,29:31,33,34,36)  # These are factors
scar[, -fcols] <- scale(scar[, -fcols])  # Stdze all numerical vars
ones <- CM.ones(3)  # matrix(1, 3, 1)
clist <- list("(Intercept)" = diag(3), Width = ones, Weight = ones,
              Disp. = diag(3), Tank = diag(3), Price = diag(3),
              Frt.Leg.Room = diag(3))
set.seed(111)
fit <- rrvglm(Country ~ Width + Weight + Disp. + Tank +
              Price + Frt.Leg.Room,
              multinomial, data = scar, Rank = 2, trace = TRUE,
              constraints = clist, noRRR = ~ 1 + Width + Weight,
              # Uncor = TRUE, Corner = FALSE,  # orig.
              Index.corner = c(1, 3),  # Less correlation
              Bestof = 3)
fit@misc$deviance  # A history of the fits
Coef(fit)
biplot(fit, chull = TRUE, scores = TRUE, clty = 2, Ccex = 2,
       ccol = "blue", scol = "orange", Ccol = "darkgreen", Clwd = 2,
       main = "1=Germany, 2=Japan, 3=Korea, 4=USA")
## End(Not run)
Reduced-rank vector generalized linear models.
Objects can be created by calls to rrvglm
.
extra
:Object of class "list"
;
the extra
argument on entry to vglm
. This
contains any extra information that might be needed
by the family function.
family
:Object of class "vglmff"
.
The family function.
iter
:Object of class "numeric"
.
The number of IRLS iterations used.
predictors
:Object of class "matrix"
with M columns, which holds the linear predictors.
assign
:Object of class "list"
,
from class "vlm"
.
This named list gives information matching the columns and the
(LM) model matrix terms.
call
:Object of class "call"
, from class "vlm"
.
The matched call.
coefficients
:Object of class
"numeric"
, from class "vlm"
.
A named vector of coefficients.
constraints
:Object of class "list"
, from
class "vlm"
.
A named list of constraint matrices used in the fitting.
contrasts
:Object of class "list"
, from
class "vlm"
.
The contrasts used (if any).
control
:Object of class "list"
, from class
"vlm"
.
A list of parameters for controlling the fitting process.
See vglm.control
for details.
criterion
:Object of class "list"
, from
class "vlm"
.
List of convergence criterion evaluated at the
final IRLS iteration.
df.residual
:Object of class
"numeric"
, from class "vlm"
.
The residual degrees of freedom.
df.total
:Object of class "numeric"
,
from class "vlm"
.
The total degrees of freedom.
dispersion
:Object of class "numeric"
,
from class "vlm"
.
The scaling parameter.
effects
:Object of class "numeric"
,
from class "vlm"
.
The effects.
fitted.values
:Object of class
"matrix"
, from class "vlm"
.
The fitted values. This is usually the mean but may be quantiles,
or the location parameter, e.g., in the Cauchy model.
misc
:Object of class "list"
,
from class "vlm"
.
A named list to hold miscellaneous parameters.
model
:Object of class "data.frame"
,
from class "vlm"
.
The model frame.
na.action
:Object of class "list"
,
from class "vlm"
.
A list holding information about missing values.
offset
:Object of class "matrix"
,
from class "vlm"
.
If non-zero, an M-column matrix of offsets.
post
:Object of class "list"
,
from class "vlm"
where post-analysis results may be put.
preplot
:Object of class "list"
,
from class "vlm"
used by plotvgam
; the plotting parameters
may be put here.
prior.weights
:Object of class
"matrix"
, from class "vlm"
holding the initially supplied weights.
qr
:Object of class "list"
,
from class "vlm"
.
QR decomposition at the final iteration.
R
:Object of class "matrix"
,
from class "vlm"
.
The R matrix in the QR decomposition used in the fitting.
rank
:Object of class "integer"
,
from class "vlm"
.
Numerical rank of the fitted model.
residuals
:Object of class "matrix"
,
from class "vlm"
.
The working residuals at the final IRLS iteration.
ResSS
:Object of class "numeric"
,
from class "vlm"
.
Residual sum of squares at the final IRLS iteration with
the adjusted dependent vectors and weight matrices.
smart.prediction
:Object of class
"list"
, from class "vlm"
.
A list of data-dependent parameters (if any)
that are used by smart prediction.
terms
:Object of class "list"
,
from class "vlm"
.
The terms
object used.
weights
:Object of class "matrix"
,
from class "vlm"
.
The weight matrices at the final IRLS iteration.
This is in matrix-band form.
x
:Object of class "matrix"
,
from class "vlm"
.
The model matrix (LM, not VGLM).
xlevels
:Object of class "list"
,
from class "vlm"
.
The levels of the factors, if any, used in fitting.
y
:Object of class "matrix"
,
from class "vlm"
.
The response, in matrix form.
Xm2
:Object of class "matrix"
,
from class "vlm"
.
See vglm-class
.
Ym2
:Object of class "matrix"
,
from class "vlm"
.
See vglm-class
.
callXm2
:Object of class "call"
, from class "vlm"
.
The matched call for argument form2
.
A.est
, C.est
:Object of class "matrix"
.
The estimates of A and C.
Class "vglm"
, directly.
Class "vlm"
, by class "vglm".
signature(x = "rrvglm")
: biplot.
signature(object = "rrvglm")
:
more detailed coefficients giving A,
, C, etc.
signature(object = "rrvglm")
:
biplot.
signature(x = "rrvglm")
:
short summary of the object.
signature(object = "rrvglm")
:
a more detailed summary of the object.
Two new slots for "rrvglm"
were added compared
to "vglm"
objects,
for VGAM 1.1-10.
They are A.est
and C.est
.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
rrvglm
,
lvplot.rrvglm
,
vglmff-class
.
## Not run:
# Rank-1 stereotype model of Anderson (1984)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))  # Noise
fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3,
              multinomial, data = pneumo, Rank = 1)
Coef(fit)
## End(Not run)
Algorithmic constants and parameters for
running rrvglm
are set using this
function.
Doubly constrained RR-VGLMs (DRR-VGLMs) are
also catered for.
rrvglm.control(Rank = 1, Corner = TRUE,
    Index.corner = head(setdiff(seq(length(str0) + Rank), str0), Rank),
    noRRR = ~ 1, str0 = NULL, Crow1positive = NULL, trace = FALSE,
    Bestof = 1, H.A.thy = list(), H.C = list(), Ainit = NULL,
    Cinit = NULL, sd.Cinit = 0.02, Algorithm = "alternating",
    Etamat.colmax = 10, noWarning = FALSE,
    Use.Init.Poisson.QO = FALSE, checkwz = TRUE, Check.rank = TRUE,
    Check.cm.rank = TRUE, wzepsilon = .Machine$double.eps^0.75, ...)
Rank: the numerical rank R of the model. Must be an element from the set {1, 2, ..., min(M, p2)}.
Corner: logical indicating whether corner constraints are to be used. Strongly recommended as the only method for fitting RR-VGLMs and DRR-VGLMs. This is one method for ensuring a unique solution and the availability of standard errors.
Index.corner: specifies the R rows of A that hold the corner constraints (by default an order-R identity matrix). For DRR-VGLMs one needs to have (restricted) corner constraints.
noRRR: formula giving terms that are not to be included in the reduced-rank regression. That is, noRRR specifies the variables making up the x_1 vector; the default ~ 1 means the intercept only.
str0: integer vector specifying which rows of the estimated constraint matrices (A) are to be all zeros. These are called structural zeros. Must not have any common value with Index.corner.
Crow1positive: currently this argument has no effect. In the future, it may be a logical vector of length Rank.
trace: logical indicating if output should be produced for each iteration.
Bestof: integer. The best of Bestof models fitted is returned. This argument helps guard against local solutions by (hopefully) finding the global solution from several fits.
H.A.thy, H.C: lists. DRR-VGLMs are doubly constrained RR-VGLMs, where A has constraint matrices in the list H.A.thy and C has constraint matrices in the list H.C.
Algorithm: character string indicating what algorithm is to be used. The default is the first one.
Ainit, Cinit: initial A and C matrices which may speed up convergence. They must be of the correct dimension.
sd.Cinit: standard deviation of the initial values for the elements of C. These are normally distributed with mean zero. This argument is used only if Use.Init.Poisson.QO = FALSE.
Etamat.colmax: positive integer, no smaller than Rank. Controls the amount of memory used by .Init.Poisson.QO().
Use.Init.Poisson.QO: logical indicating whether the .Init.Poisson.QO() function is used to obtain initial values for the matrix C.
checkwz: logical indicating whether the diagonal elements of the working weight matrices should be checked whether they are sufficiently positive, i.e., greater than wzepsilon. If not, any values less than wzepsilon are replaced with this value.
noWarning, Check.rank, Check.cm.rank: same as vglm.control.
wzepsilon: small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive.
...: variables in ... are passed into vglm.control.
In the above, R is the Rank and M is the number of linear predictors.
VGAM supported three normalizations
to ensure a unique solution.
But currently,
only corner constraints will work with
summary
of RR-VGLM
and DRR-VGLM objects.
Update during late-2023/early-2024:
with ongoing work implementing
the "drrvglm"
class, there may
be disruption and changes to other
normalizations. However, corner
constraints should be fully supported
and have the greatest priority.
A list with components matching the input names. Some error checking is done, but not much.
In VGAM 1.1-11 and higher,
the following arguments are no longer supported:
Wmat
, Norrr
, Svd.arg
,
Uncorrelated.latvar
, scaleA
.
Users should use corner constraints only.
The arguments in this function begin with an
upper case letter to help avoid interference
with those of vglm.control
.
In the example below a rank-1 stereotype model (Anderson, 1984) is fitted; however, the intercepts are completely unconstrained rather than sorted.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
rrvglm
,
rrvglm-class
,
summary.drrvglm
,
rrvglm.optim.control
,
vglm
,
vglm.control
,
TypicalVGAMfamilyFunction
,
CM.qnorm
,
cqo
.
## Not run:
set.seed(111)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = runif(nrow(pneumo)))  # Unrelated
fit <- rrvglm(cbind(normal, mild, severe) ~ let + x3,
              multinomial, pneumo, Rank = 1, Index.corner = 2)
constraints(fit)
vcov(fit)
summary(fit)
## End(Not run)
Algorithmic constants and parameters for running optim
within rrvglm
are set using this function.
rrvglm.optim.control(Fnscale = 1, Maxit = 100, Switch.optimizer = 3,
                     Abstol = -Inf, Reltol = sqrt(.Machine$double.eps),
                     ...)
Fnscale: passed into optim as fnscale.
Maxit: passed into optim as maxit.
Switch.optimizer: iteration number when the "Nelder-Mead" method of optim is switched to the quasi-Newton "BFGS" method.
Abstol: passed into optim as abstol.
Reltol: passed into optim as reltol.
...: ignored.
See optim
for more details.
A list with components equal to the arguments.
The transition between optimization methods may be
unstable, so users may have to vary the value of
Switch.optimizer
.
Practical experience with Switch.optimizer shows that setting it to too large a value may lead to a local solution, whereas setting it to a low value is more likely to obtain the global solution. It appears that, if BFGS kicks in too late when the Nelder-Mead algorithm is starting to converge to a local solution, then switching to BFGS will not be sufficient to bypass convergence to that local solution.
Thomas W. Yee
Decay counts of polonium recorded by Rutherford and Geiger (1910).
data(ruge)
This data frame contains the following columns:
counts: a numeric vector, counts or frequencies.
number: a numeric vector, the number of decays.
These are the radioactive decay counts of polonium recorded by Rutherford and Geiger (1910), representing the number of scintillations in 2608 intervals of 1/8 minute each. For example, 57 intervals had zero counts. The counts can be thought of as being approximately Poisson distributed.
Rutherford, E. and Geiger, H. (1910) The Probability Variations in the Distribution of alpha Particles, Philosophical Magazine, 20, 698–704.
lambdahat <- with(ruge, weighted.mean(number, w = counts))
(N <- with(ruge, sum(counts)))
with(ruge, cbind(number, counts,
                 fitted = round(N * dpois(number, lambdahat))))
s
is used in the definition of (vector) smooth terms within
vgam
formulas.
This corresponds to 1st-generation VGAMs that use backfitting
for their estimation.
The effective degrees of freedom are prespecified.
s(x, df = 4, spar = 0, ...)
x: covariate (abscissae) to be smoothed.
df: numerical vector of length r. Effective degrees of freedom, one value per component function.
spar: numerical vector of length r. Positive smoothing parameters; a value of 0 means that df is used to control the amount of smoothing.
...: ignored for now.
In this help file M is the number of additive predictors and r is the number of component functions to be estimated (so that r is an element from the set {1, 2, ..., M}). Also, if n is the number of distinct abscissae, then s will fail if n < 7.
s
, which is symbolic and does not perform any smoothing itself,
only handles a single covariate.
Note that s
works in vgam
only.
It has no effect in vglm
(actually, it is similar to the identity function I
so that s(x2)
is the same as x2
in the LM model matrix).
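As an illustrative check of this (a sketch using the hunua data from the example below), the two fits should give essentially identical coefficients because s() acts as the identity within vglm():
fit.a <- vglm(agaaus ~ s(altitude), binomialff, data = hunua)
fit.b <- vglm(agaaus ~ altitude,    binomialff, data = hunua)
max(abs(coef(fit.a) - coef(fit.b)))  # Should be 0, or very nearly so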
It differs from the s()
of the gam package and
the s
of the mgcv package;
they should not be mixed together.
Also, terms involving s
should be simple additive terms, and should not involve interactions or nesting, etc.
For example, myfactor:s(x2)
is not a good idea.
A vector with attributes that are (only) used by vgam
.
The vector cubic smoothing spline which s() represents is computationally demanding for large M. The cost is approximately O(n M^3) where n is the number of unique abscissae.
Currently a bug relating to the use of s()
is that
only constraint matrices whose columns are orthogonal are handled
correctly. If any s()
term has a constraint matrix that
does not satisfy this condition then a warning is issued.
See is.buggy
for more information.
A more modern alternative to using
s
with vgam
is to use
sm.os
or
sm.ps
.
This does not require backfitting
and allows automatic smoothing parameter selection.
However, this alternative should only be used when the
sample size is reasonably large (n > 500, say).
These are called Generation-2 VGAMs.
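A brief usage sketch of this alternative (illustrative only; note that hunua has somewhat fewer observations than the sample size suggested above):
fit.g2 <- vgam(agaaus ~ sm.os(altitude), binomialff, data = hunua)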
Another alternative to using
s
with vgam
is
bs
and/or ns
with vglm
.
The latter implements half-stepping, which is helpful if
convergence is difficult.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
vgam
,
is.buggy
,
sm.os
,
sm.ps
,
vsmooth.spline
.
# Nonparametric logistic regression
fit1 <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
## Not run: plot(fit1, se = TRUE)

# Bivariate logistic model with artificial data
nn <- 300
bdata <- data.frame(x1 = runif(nn), x2 = runif(nn))
bdata <- transform(bdata,
  y1 = rbinom(nn, size = 1,
              prob = logitlink(sin(2 * x2), inverse = TRUE)),
  y2 = rbinom(nn, size = 1,
              prob = logitlink(sin(2 * x2), inverse = TRUE)))
fit2 <- vgam(cbind(y1, y2) ~ x1 + s(x2, 3), trace = TRUE,
             binom2.or(exchangeable = TRUE), data = bdata)
coef(fit2, matrix = TRUE)  # Hard to interpret
## Not run: plot(fit2, se = TRUE, which.term = 2, scol = "blue")
Estimates the location and scale parameters of a scaled Student t distribution with 2 degrees of freedom, by maximum likelihood estimation.
sc.studentt2(percentile = 50, llocation = "identitylink",
             lscale = "loglink", ilocation = NULL, iscale = NULL,
             imethod = 1, zero = "scale")
percentile: a numerical vector containing values between 0 and 100, which are the quantiles and expectiles. They will be returned as 'fitted values'.
llocation, lscale: see Links for more choices.
ilocation, iscale, imethod, zero: see CommonVGAMffArguments for information.
Koenker (1993) solved for the distribution whose quantiles are equal to its expectiles. Its canonical form has mean and mode at 0, and has a heavy tail (in fact, its variance is infinite).
The standard ("canonical") form of this distribution can be endowed with a location and scale parameter. The standard form has a density that can be written as
f(z) = \frac{2}{(4 + z^2)^{3/2}}
for real z. Then Y = \mu + \sigma Z for location and scale parameters \mu and \sigma > 0. The mean of Y is \mu. By default, \mu = 0 and \sigma = 1.
The expectiles/quantiles corresponding to
percentile
are returned as the fitted values;
in particular, percentile = 50
corresponds to the mean
(0.5 expectile) and median (0.5 quantile).
Note that if Z has a standard dsc.t2 distribution then Z = \sqrt{2}\, T where T has a Student-t distribution with 2 degrees of freedom. The two parameters here can also be estimated using studentt2 by specifying df = 2 and making an adjustment for the scale parameter; however, this VGAM family function is more efficient since the EIM is known (Fisher scoring is implemented).
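An illustrative check of this relation via the quantile functions (assuming the accompanying [dpqr]sc.t2 functions):
all.equal(qsc.t2(0.9), sqrt(2) * qt(0.9, df = 2))  # Should be TRUE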
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
rrvglm
and vgam
.
T. W. Yee
Koenker, R. (1993). When are expectiles percentiles? (solution) Econometric Theory, 9, 526–527.
set.seed(123); nn <- 1000
kdata <- data.frame(x2 = sort(runif(nn)))
kdata <- transform(kdata, mylocat = 1 + 3 * x2, myscale = 1)
kdata <- transform(kdata, y = rsc.t2(nn, loc = mylocat, scale = myscale))
fit <- vglm(y ~ x2, sc.studentt2(perc = c(1, 50, 99)), data = kdata)
fit2 <- vglm(y ~ x2, studentt2(df = 2), data = kdata)  # 'same' as fit
coef(fit, matrix = TRUE)
head(fitted(fit))
head(predict(fit))

# Nice plot of the results
## Not run:
plot(y ~ x2, data = kdata, col = "blue", las = 1,
     sub = paste("n =", nn),
     main = "Fitted quantiles/expectiles using the sc.studentt2() distribution")
matplot(with(kdata, x2), fitted(fit), add = TRUE, type = "l", lwd = 3)
legend("bottomright", lty = 1:3, lwd = 3,
       legend = colnames(fitted(fit)), col = 1:3)
## End(Not run)
fit@extra$percentile  # Sample quantiles
Generic function that computes Rao's score test statistics evaluated at the null values.
score.stat(object, ...)
score.stat.vlm(object, values0 = 0, subset = NULL, omit1s = TRUE,
               all.out = FALSE, orig.SE = FALSE, iterate.SE = TRUE,
               iterate.score = TRUE, trace = FALSE, ...)
object, values0, subset: same as in wald.stat.vlm.
omit1s, all.out: same as in wald.stat.vlm.
orig.SE, iterate.SE: same as in wald.stat.vlm.
iterate.score: logical. The score vector is evaluated at one value of values0; this argument determines whether the remaining coefficients are then re-estimated by IRLS at that null value.
trace: same as in wald.stat.vlm.
...: ignored for now.
The (Rao) score test
(also known as the Lagrange multiplier test in econometrics)
is a third general method for
hypothesis testing under a likelihood-based framework
(the others are the likelihood ratio test and
Wald test; see lrt.stat
and
wald.stat
).
Asymptotically, the three tests are equivalent.
The Wald test is not invariant to parameterization, and
the usual Wald test statistics computed at the estimates
make it vulnerable to the Hauck-Donner effect
(HDE; see hdeff
).
This function is similar to wald.stat
in that
one coefficient is set to 0 (by default) and the other
coefficients are iterated by IRLS to get their MLE subject to this
constraint.
The SE is almost always based on the expected information matrix
(EIM) rather than the OIM, and for some models
the EIM and OIM coincide.
By default the
signed square root of the
Rao score statistics are returned.
If all.out = TRUE
then a list is returned with the
following components:
score.stat
the score statistic,
SE0
the standard error of that coefficient,
values0
the null values.
Approximately, the default score statistics output are
standard normal random variates if each null hypothesis is true.
Altogether, over the eight combinations of iterate.SE, iterate.score and orig.SE, there are six different variants of the Rao score statistic that can be returned, because the score vector has 2 variants and the SEs have 3 variants.
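A minimal sketch of all.out = TRUE, using the fit pfit from the example below (component names as documented above):
ss <- score.stat(pfit, all.out = TRUE)
ss$score.stat  # The score statistics
ss$SE0         # The standard errors
ss$values0     # The null values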
See wald.stat.vlm
.
Thomas W. Yee
wald.stat
,
lrt.stat
,
summaryvglm
,
summary.glm
,
anova.vglm
,
vglm
,
hdeff
.
set.seed(1)
pneumo <- transform(pneumo, let = log(exposure.time),
                    x3 = rnorm(nrow(pneumo)))
(pfit <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, pneumo))
score.stat(pfit)  # No HDE here; should be similar to the next line:
coef(summary(pfit))[, "z value"]  # Wald statistics computed at the MLE
summary(pfit, score0 = TRUE)
Plots the piecewise segmented curve made up of Wald statistics versus estimates, using a colour code for the HDE severity.
seglines(x, y, dy, ddy, lwd = 2, cex = 2, plot.it = TRUE,
         add.legend = TRUE, cex.legend = 1,
         position.legend = "topleft", eta0 = NA, COPS0 = NA,
         lty.table = c("solid", "dashed", "solid", "dashed",
                       "solid", "dashed", "solid"),
         col.table = rainbow.sky[-5], pch.table = 7:1,
         severity.table = c("None", "Faint", "Weak", "Moderate",
                            "Strong", "Extreme", "Undetermined"),
         FYI = FALSE, ...)
x, y, dy, ddy: same as in hdeffsev.
lwd, cex: graphical parameters: line width, and character expansion.
plot.it: logical, plot it?
add.legend, position.legend: logical and character; add a legend? The latter argument is fed into legend.
cex.legend: self-explanatory.
severity.table, eta0, COPS0: same as in hdeffsev.
lty.table, col.table, pch.table: graphical parameters for the 7 different types of segments. Usually users should not assign anything to these arguments.
FYI, ...: should be ignored.
This function was written to
complement hdeffsev
and is rough-and-ready.
It plots the signed Wald statistics as a function of
the estimates, and uses a colour-code to indicate
the severity of the
Hauck-Donner effect (HDE).
This can be obtained from its first two derivatives.
This function returns the severity of the HDE, possibly invisibly.
This function is likely to change in the short future because it is experimental and far from complete.
Thomas W. Yee.
deg <- 4  # myfun is a function that approximates the HDE
myfun <- function(x, deriv = 0)
  switch(as.character(deriv),
         '0' = x^deg * exp(-x),
         '1' = (deg * x^(deg-1) - x^deg) * exp(-x),
         '2' = (deg * (deg-1) * x^(deg-2) - 2*deg * x^(deg-1) +
                x^deg) * exp(-x))
## Not run:
curve(myfun, 0, 10, col = "white")
xgrid <- seq(0, 10, length = 101)
seglines(xgrid, myfun(xgrid), myfun(xgrid, deriv = 1), COPS0 = 2,
         myfun(xgrid, deriv = 2),
         pch.table = NULL, position = "bottom")
## End(Not run)
Select variables from a data frame whose names begin with a certain character string.
Select(data = list(), prefix = "y", lhs = NULL, rhs = NULL,
       rhs2 = NULL, rhs3 = NULL, as.character = FALSE,
       as.formula.arg = FALSE, tilde = TRUE, exclude = NULL,
       sort.arg = TRUE)
data: a data frame or a matrix.
prefix: a vector of character strings, or a logical. If a character then the variables chosen from data begin with the value of prefix. If a logical then TRUE means all the variables are chosen.
lhs: a character string. The response of a formula.
rhs: a character string. Included as part of the RHS of a formula.
rhs2, rhs3: same as rhs, appended to its RHS.
as.character: logical. Return the answer as a character string?
as.formula.arg: logical. Is the answer a formula?
tilde: logical. If as.character = TRUE and as.formula.arg = TRUE then include the tilde in the formula?
exclude: vector of character strings. Exclude these variables explicitly.
sort.arg: logical. Sort the variables?
This is meant as a utility function to avoid manually:
(i) making a cbind
call to construct
a big matrix response,
and
(ii) constructing a formula involving a lot of terms.
The savings can be made because the variables of interest
begin with some prefix, e.g., with the character "y"
.
If as.character = FALSE
and
as.formula.arg = FALSE
then a matrix such
as cbind(y1, y2, y3)
.
If as.character = TRUE
and
as.formula.arg = FALSE
then a character string such
as "cbind(y1, y2, y3)"
.
If as.character = FALSE
and
as.formula.arg = TRUE
then a formula
such
as lhs ~ y1 + y2 + y3
.
If as.character = TRUE
and
as.formula.arg = TRUE
then a character string such
as "lhs ~ y1 + y2 + y3"
.
See the examples below.
By default, if no variables beginning with the value of prefix are found then a NULL is returned.
Setting prefix = " "
is a way of selecting no variables.
This function is a bit experimental at this stage and
may change in the short future.
Some of its utility may be better achieved using
subset
and its select
argument,
e.g., subset(pdata, TRUE, select = y01:y10)
.
For some models such as posbernoulli.t
the
order of the variables in the xij
argument is
crucial, therefore care must be taken with the
argument sort.arg
.
In some instances, it may be good to rename variables
y1
to y01
,
y2
to y02
, etc.
when there are variables such as
y14
.
Currently subsetcol()
and Select()
are identical.
One of these functions might be withdrawn in the future.
T. W. Yee.
vglm
,
cbind
,
subset
,
formula
,
fill1
.
Pneumo <- pneumo
colnames(Pneumo) <- c("y1", "y2", "y3", "x2")  # "y" vars are response
Pneumo$x1 <- 1; Pneumo$x3 <- 3; Pneumo$x <- 0; Pneumo$x4 <- 4  # Add these
Select(data = Pneumo)  # Same as with(Pneumo, cbind(y1, y2, y3))
Select(Pneumo, "x")
Select(Pneumo, "x", sort = FALSE, as.char = TRUE)
Select(Pneumo, "x", exclude = "x1")
Select(Pneumo, "x", exclude = "x1", as.char = TRUE)
Select(Pneumo, c("x", "y"))
Select(Pneumo, "z")  # Now returns a NULL
Select(Pneumo, " ")  # Now returns a NULL
Select(Pneumo, prefix = TRUE, as.formula = TRUE)
Select(Pneumo, "x", exclude = c("x3", "x1"), as.formula = TRUE,
       lhs = "cbind(y1, y2, y3)", rhs = "0")
Select(Pneumo, "x", exclude = "x1", as.formula = TRUE, as.char = TRUE,
       lhs = "cbind(y1, y2, y3)", rhs = "0")

# Now a 'real' example:
Huggins89table1 <- transform(Huggins89table1, x3.tij = t01)
tab1 <- subset(Huggins89table1,
               rowSums(Select(Huggins89table1, "y")) > 0)
# Same as
# subset(Huggins89table1,
#        y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9 + y10 > 0)

# Long way to do it:
fit.th <-
  vglm(cbind(y01, y02, y03, y04, y05, y06, y07, y08, y09, y10) ~
       x2 + x3.tij,
       xij = list(x3.tij ~ t01 + t02 + t03 + t04 + t05 + t06 + t07 +
                           t08 + t09 + t10 - 1),
       posbernoulli.t(parallel.t = TRUE ~ x2 + x3.tij),
       data = tab1, trace = TRUE,
       form2 = ~ x2 + x3.tij + t01 + t02 + t03 + t04 + t05 + t06 +
                 t07 + t08 + t09 + t10)
# Short way to do it:
Fit.th <- vglm(Select(tab1, "y") ~ x2 + x3.tij,
               xij = list(Select(tab1, "t", as.formula = TRUE,
                                 sort = FALSE, lhs = "x3.tij",
                                 rhs = "0")),
               posbernoulli.t(parallel.t = TRUE ~ x2 + x3.tij),
               data = tab1, trace = TRUE,
               form2 = Select(tab1, prefix = TRUE, as.formula = TRUE))
Estimation of the probabilities of a two-stage binomial distribution.
seq2binomial(lprob1 = "logitlink", lprob2 = "logitlink",
             iprob1 = NULL, iprob2 = NULL, parallel = FALSE, zero = NULL)
lprob1, lprob2: Parameter link functions applied to the two probabilities.
iprob1, iprob2: Optional initial value for the first and second probabilities respectively. A NULL means a value is computed internally.
parallel, zero: Details at CommonVGAMffArguments.
This VGAM family function fits the model described by Crowder and Sweeting (1989), which is as follows. Each of m spores has a probability p1 of germinating. Of the y1 spores that germinate, each has a probability p2 of bending in a particular direction. Let y2 be the number that bend in the specified direction. The probability model for these data is

  P(y1, y2) = choose(m, y1) * p1^y1 * (1 - p1)^(m - y1) *
              choose(y1, y2) * p2^y2 * (1 - p2)^(y1 - y2),

for 0 < p1 < 1, 0 < p2 < 1, y1 = 0, 1, ..., m and y2 = 0, 1, ..., y1. Here, p1 is prob1 and p2 is prob2.
Although the authors refer to this as the bivariate binomial model, I have named it the (two-stage) sequential binomial model. Fisher scoring is used.
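As a quick sanity check of the factorization above, the joint probability is simply a product of two binomial probabilities, so it can be computed with base R's dbinom(); this is only an illustrative sketch with made-up numbers:

# Two-stage sequential binomial probability, computed directly:
# P(y1, y2) = dbinom(y1; m, p1) * dbinom(y2; y1, p2)
m <- 10; y1 <- 6; y2 <- 2    # Made-up counts
p1 <- 0.6; p2 <- 0.3         # Made-up probabilities
dbinom(y1, size = m, prob = p1) * dbinom(y2, size = y1, prob = p2)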
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The response must be a two-column matrix of sample proportions corresponding to y1/m and y2/y1. The m values should be inputted with the weights argument of vglm and vgam. The fitted value is a two-column matrix of estimated probabilities p1 and p2.
A common form of error is when there are no trials for y2, e.g., if mvector below has some values which are zero.
Thomas W. Yee
Crowder, M. and Sweeting, T. (1989). Bayesian inference for a bivariate binomial distribution. Biometrika, 76, 599–603.
sdata <- data.frame(mvector = round(rnorm(nn <- 100, m = 10, sd = 2)),
                    x2 = runif(nn))
sdata <- transform(sdata, prob1 = logitlink(+2 - x2, inverse = TRUE),
                          prob2 = logitlink(-2 + x2, inverse = TRUE))
sdata <- transform(sdata, successes1 = rbinom(nn, size = mvector, prob = prob1))
sdata <- transform(sdata, successes2 = rbinom(nn, size = successes1, prob = prob2))
sdata <- transform(sdata, y1 = successes1 / mvector)
sdata <- transform(sdata, y2 = successes2 / successes1)
fit <- vglm(cbind(y1, y2) ~ x2, seq2binomial, weight = mvector,
            data = sdata, trace = TRUE)
coef(fit)
coef(fit, matrix = TRUE)
head(fitted(fit))
head(depvar(fit))
head(weights(fit, type = "prior"))  # Same as with(sdata, mvector)
# Number of first successes:
head(depvar(fit)[, 1] * c(weights(fit, type = "prior")))
# Number of second successes:
head(depvar(fit)[, 2] * c(weights(fit, type = "prior")) * depvar(fit)[, 1])
Sets up smart prediction in one of two modes: "write" and "read".
setup.smart(mode.arg, smart.prediction = NULL, max.smart = 30)
mode.arg: a character string, either "write" or "read".
smart.prediction: If in "read" mode then this argument is assigned to .smart.prediction in smartpredenv; it is usually object$smart.prediction from a fitted model. Ignored in "write" mode.
max.smart: the number of components that the initially empty list .smart.prediction is given in "write" mode.
This function is only required by programmers writing a modelling function such as lm and glm, or a prediction function of such, e.g., predict.lm.
The function setup.smart operates by mimicking the operations of a first-in first-out (FIFO) data structure, i.e., a queue.
Nothing is returned.
In "write"
mode
.smart.prediction
in
smartpredenv
is assigned an empty list with max.smart
components.
In "read"
mode
.smart.prediction
in
smartpredenv
is assigned smart.prediction
.
Then
.smart.prediction.counter
in
smartpredenv
is assigned the value 0, and
.smart.prediction.mode
and .max.smart
are written to
smartpredenv
too.
lm, predict.lm.
## Not run:
setup.smart("write")  # Put at the beginning of lm
## End(Not run)

## Not run:
# Put at the beginning of predict.lm
setup.smart("read", smart.prediction = object$smart.prediction)
## End(Not run)
The two parameters of the univariate standard simplex distribution are estimated by full maximum likelihood estimation.
simplex(lmu = "logitlink", lsigma = "loglink", imu = NULL, isigma = NULL,
        imethod = 1, ishrinkage = 0.95, zero = "sigma")
lmu, lsigma: Link functions for mu and sigma.
imu, isigma: Optional initial values for mu and sigma. A NULL means a value is computed internally.
imethod, ishrinkage, zero: See CommonVGAMffArguments for information.
The probability density function can be written

  f(y; mu, sigma) = [2 * pi * sigma^2 * (y * (1 - y))^3]^(-1/2) *
                    exp[-d(y; mu) / (2 * sigma^2)],

where d(y; mu) = (y - mu)^2 / [y * (1 - y) * mu^2 * (1 - mu)^2], for 0 < y < 1, 0 < mu < 1 and sigma > 0. The mean of Y is mu (called mu, and returned as the fitted values).

The second parameter, sigma, of this standard simplex distribution is known as the dispersion parameter. The unit variance function is V(mu) = mu^3 * (1 - mu)^3. Fisher scoring is applied to both parameters.
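As an illustrative check (a minimal sketch, not part of the family function), the density above should integrate to unity over (0, 1):

integrate(dsimplex, lower = 0, upper = 1, mu = 0.5, dispersion = 1)  # Near 1
integrate(dsimplex, lower = 0, upper = 1, mu = 0.2, dispersion = 2)  # Near 1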
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This distribution is potentially useful for dispersion modelling.
Numerical problems may occur when mu is very close to 0 or 1.
T. W. Yee
Jorgensen, B. (1997). The Theory of Dispersion Models. London: Chapman & Hall
Song, P. X.-K. (2007). Correlated Data Analysis: Modeling, Analytics, and Applications. Springer.
dsimplex, dirichlet, rigff, binomialff.
sdata <- data.frame(x2 = runif(nn <- 1000))
sdata <- transform(sdata, eta1 = 1 + 2 * x2, eta2 = 1 - 2 * x2)
sdata <- transform(sdata, y = rsimplex(nn, mu = logitlink(eta1, inverse = TRUE),
                                       dispersion = exp(eta2)))
(fit <- vglm(y ~ x2, simplex(zero = NULL), data = sdata, trace = TRUE))
coef(fit, matrix = TRUE)
summary(fit)
Density function and random generation for the simplex distribution.
dsimplex(x, mu = 0.5, dispersion = 1, log = FALSE)
rsimplex(n, mu = 0.5, dispersion = 1)
x: Vector of quantiles. The support of the distribution is the interval (0, 1).
mu, dispersion: Mean and dispersion parameters. The former lies in the interval (0, 1).
n, log: Same usage as in, e.g., rnorm and dnorm.
The VGAM family function simplex
fits this model;
see that online help for more information.
For rsimplex()
the rejection method is used;
it may be very slow if the density is highly peaked,
and will fail if the density asymptotes at the boundary.
dsimplex(x)
gives the density function,
rsimplex(n)
gives random variates.
T. W. Yee
sigma <- c(4, 2, 1)  # Dispersion parameter
mymu <- c(0.1, 0.5, 0.7); xxx <- seq(0, 1, len = 501)
## Not run:
par(mfrow = c(3, 3))  # Figure 2.1 of Song (2007)
for (iii in 1:3)
  for (jjj in 1:3) {
    plot(xxx, dsimplex(xxx, mymu[jjj], sigma[iii]), type = "l",
         col = "blue", xlab = "", ylab = "",
         main = paste("mu = ", mymu[jjj], ", sigma = ", sigma[iii], sep = ""))
  }
## End(Not run)
Simulate one or more responses from the distribution corresponding to a fitted model object.
## S3 method for class 'vlm'
simulate(object, nsim = 1, seed = NULL, ...)
object: an object representing a fitted model. Usually an object of class "vglm" or "vgam".
nsim, seed: Same as simulate.
...: additional optional arguments.
This is a methods function for simulate and hopefully should behave in a very similar manner. Only VGAM family functions with a simslot slot have been implemented for simulate.

Similar to simulate.
Note that many VGAM family functions can handle multiple responses. This can result in a longer data frame with more rows (nsim multiplied by n rather than the ordinary n). In the future an argument may be available so that there are always n rows no matter how many responses were inputted.
With multiple responses and/or multivariate responses, the order of the elements may differ. For some VGAM families, the order is n x N x F, where n is the sample size, N is nsim and F is ncol(fitted(vglmObject)). For other VGAM families, the order is n x F x N. An example of each is given below.
Currently the VGAM family functions with a simslot slot are: alaplace1, alaplace2, betabinomial, betabinomialff, betaR, betaff, biamhcop, bifrankcop, bilogistic, binomialff, binormal, binormalcop, biclaytoncop, cauchy, cauchy1, chisq, dirichlet, dagum, erlang, exponential, bifgmcop, fisk, gamma1, gamma2, gammaR, gengamma.stacy, geometric, gompertz, gumbelII, hzeta, inv.lomax, inv.paralogistic, kumar, lgamma1, lgamma3, lindley, lino, logff, logistic1, logistic, lognormal, lomax, makeham, negbinomial, negbinomial.size, paralogistic, perks, poissonff, posnegbinomial, posnormal, pospoisson, polya, polyaR, posbinomial, rayleigh, riceff, simplex, sinmad, slash, studentt, studentt2, studentt3, triangle, uninormal, yulesimon, zageometric, zageometricff, zanegbinomial, zanegbinomialff, zapoisson, zapoissonff, zigeometric, zigeometricff, zinegbinomial, zipf, zipoisson, zipoissonff.

Also, categorical family functions: acat, cratio, sratio, cumulative, multinomial.
See also RNG about random number generation in R, and vglm and vgam for model fitting.
nn <- 10; mysize <- 20; set.seed(123)
bdata <- data.frame(x2 = rnorm(nn))
bdata <- transform(bdata,
  y1 = rbinom(nn, size = mysize, p = logitlink(1 + x2, inverse = TRUE)),
  y2 = rbinom(nn, size = mysize, p = logitlink(1 + x2, inverse = TRUE)),
  f1 = factor(as.numeric(rbinom(nn, size = 1,
                                p = logitlink(1 + x2, inverse = TRUE)))))
(fit1 <- vglm(cbind(y1, aaa = mysize - y1) ~ x2,  # Matrix response (2-colns)
              binomialff, data = bdata))
(fit2 <- vglm(f1 ~ x2, binomialff, model = TRUE, data = bdata))  # Factor response
set.seed(123); simulate(fit1, nsim = 8)
set.seed(123); c(simulate(fit2, nsim = 3))  # Use c() when model = TRUE

# An n x N x F example
set.seed(123); n <- 100
bdata <- data.frame(x2 = runif(n), x3 = runif(n))
bdata <- transform(bdata, y1 = rnorm(n, 1 + 2 * x2),
                          y2 = rnorm(n, 3 + 4 * x2))
fit1 <- vglm(cbind(y1, y2) ~ x2, binormal(eq.sd = TRUE), data = bdata)
nsim <- 1000  # Number of simulations for each observation
my.sims <- simulate(fit1, nsim = nsim)
dim(my.sims)  # A data frame
aaa <- array(unlist(my.sims), c(n, nsim, ncol(fitted(fit1))))  # n by N by F
summary(rowMeans(aaa[, , 1]) - fitted(fit1)[, 1])  # Should be all 0s
summary(rowMeans(aaa[, , 2]) - fitted(fit1)[, 2])  # Should be all 0s

# An n x F x N example
n <- 100; set.seed(111); nsim <- 1000
zdata <- data.frame(x2 = runif(n))
zdata <- transform(zdata,
  lambda1 = loglink(-0.5 + 2 * x2, inverse = TRUE),
  lambda2 = loglink( 0.5 + 2 * x2, inverse = TRUE),
  pstr01  = logitlink( 0,   inverse = TRUE),
  pstr02  = logitlink(-1.0, inverse = TRUE))
zdata <- transform(zdata,
  y1 = rzipois(n, lambda = lambda1, pstr0 = pstr01),
  y2 = rzipois(n, lambda = lambda2, pstr0 = pstr02))
zip.fit <- vglm(cbind(y1, y2) ~ x2, zipoissonff, data = zdata, crit = "coef")
my.sims <- simulate(zip.fit, nsim = nsim)
dim(my.sims)  # A data frame
aaa <- array(unlist(my.sims), c(n, ncol(fitted(zip.fit)), nsim))  # n by F by N
summary(rowMeans(aaa[, 1, ]) - fitted(zip.fit)[, 1])  # Should be all 0s
summary(rowMeans(aaa[, 2, ]) - fitted(zip.fit)[, 2])  # Should be all 0s
Maximum likelihood estimation of the 3-parameter Singh-Maddala distribution.
sinmad(lscale = "loglink", lshape1.a = "loglink", lshape3.q = "loglink",
       iscale = NULL, ishape1.a = NULL, ishape3.q = NULL, imethod = 1,
       lss = TRUE, gscale = exp(-5:5), gshape1.a = exp(-5:5),
       gshape3.q = exp(-5:5), probs.y = c(0.25, 0.5, 0.75), zero = "shape")
lss: See CommonVGAMffArguments.
lshape1.a, lscale, lshape3.q: Parameter link functions applied to the (positive) parameters a, b (the scale parameter) and q.
iscale, ishape1.a, ishape3.q, imethod, zero: See CommonVGAMffArguments for information.
gscale, gshape1.a, gshape3.q: See CommonVGAMffArguments for information.
probs.y: See CommonVGAMffArguments for information.
The 3-parameter Singh-Maddala distribution is the 4-parameter generalized beta II distribution with shape parameter p = 1. It is known under various other names, such as the Burr XII (or just the Burr distribution), Pareto IV, beta-P, and generalized log-logistic distribution. More details can be found in Kleiber and Kotz (2003).

Some distributions which are special cases of the 3-parameter Singh-Maddala are the Lomax (a = 1), Fisk (q = 1), and paralogistic (a = q).

The Singh-Maddala distribution has density

  f(y) = a * q * y^(a-1) / [b^a * (1 + (y/b)^a)^(1+q)]

for a > 0, b > 0, q > 0 and y >= 0. Here, b is the scale parameter scale, and the others are shape parameters. The cumulative distribution function is

  F(y) = 1 - [1 + (y/b)^a]^(-q).

The mean is

  E(Y) = b * gamma(1 + 1/a) * gamma(q - 1/a) / gamma(q)

provided -a < 1 < a*q; these are returned as the fitted values. This family function handles multiple responses.
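The closed-form cumulative distribution function above is easy to verify numerically against psinmad; a minimal sketch with arbitrary parameter values:

aa <- 2; bb <- 3; qq <- 1.5; yy <- 4  # Arbitrary values
psinmad(yy, scale = bb, shape1.a = aa, shape3.q = qq)
1 - (1 + (yy / bb)^aa)^(-qq)  # Should agree with the line above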
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
See the notes in genbetaII.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Sinmad, genbetaII, betaII, dagum, fisk, inv.lomax, lomax, paralogistic, inv.paralogistic, simulate.vlm.
## Not run:
sdata <- data.frame(y = rsinmad(n = 1000, shape1 = exp(1),
                                scale = exp(2), shape3 = exp(0)))
fit <- vglm(y ~ 1, sinmad(lss = FALSE), sdata, trace = TRUE)
fit <- vglm(y ~ 1, sinmad(lss = FALSE, ishape1.a = exp(1)),
            sdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)

# Harder problem (has the shape3.q parameter going to infinity)
set.seed(3)
sdata <- data.frame(y1 = rbeta(1000, 6, 6))
# hist(with(sdata, y1))
if (FALSE) {  # These struggle
  fit1 <- vglm(y1 ~ 1, sinmad(lss = FALSE), sdata, trace = TRUE)
  fit1 <- vglm(y1 ~ 1, sinmad(lss = FALSE), sdata, trace = TRUE, crit = "coef")
  Coef(fit1)
}
# Try this remedy:
fit2 <- vglm(y1 ~ 1, data = sdata, trace = TRUE, stepsize = 0.05, maxit = 99,
             sinmad(lss = FALSE, ishape3.q = 3, lshape3.q = "logloglink"))
coef(fit2, matrix = TRUE)
Coef(fit2)
## End(Not run)
Density, distribution function, quantile function and random generation for the Singh-Maddala distribution with shape parameters a and q, and scale parameter scale.
dsinmad(x, scale = 1, shape1.a, shape3.q, log = FALSE)
psinmad(q, scale = 1, shape1.a, shape3.q, lower.tail = TRUE, log.p = FALSE)
qsinmad(p, scale = 1, shape1.a, shape3.q, lower.tail = TRUE, log.p = FALSE)
rsinmad(n, scale = 1, shape1.a, shape3.q)
x, q: vector of quantiles.
p: vector of probabilities.
n: number of observations. If length(n) > 1, the length is taken to be the number required.
shape1.a, shape3.q: shape parameters.
scale: scale parameter.
log: Logical. If log = TRUE then the logarithm of the density is returned.
lower.tail, log.p: Same meaning as in pnorm or qnorm.
See sinmad, which is the VGAM family function for estimating the parameters by maximum likelihood estimation.
dsinmad
gives the density,
psinmad
gives the distribution function,
qsinmad
gives the quantile function, and
rsinmad
generates random deviates.
The Singh-Maddala distribution is a special case of the 4-parameter generalized beta II distribution.
T. W. Yee and Kai Huang
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
sdata <- data.frame(y = rsinmad(n = 3000, scale = exp(2),
                                shape1 = exp(1), shape3 = exp(1)))
fit <- vglm(y ~ 1, sinmad(lss = FALSE, ishape1.a = 2.1),
            data = sdata, trace = TRUE, crit = "coef")
coef(fit, matrix = TRUE)
Coef(fit)
Estimates the two parameters of a Skellam distribution by maximum likelihood estimation.
skellam(lmu1 = "loglink", lmu2 = "loglink", imu1 = NULL, imu2 = NULL,
        nsimEIM = 100, parallel = FALSE, zero = NULL)
lmu1, lmu2: Link functions for the mu1 and mu2 parameters.
imu1, imu2: Optional initial values for the parameters. See CommonVGAMffArguments for more information.
nsimEIM, parallel, zero: See CommonVGAMffArguments for more information.
The Skellam distribution models the difference between two independent Poisson distributions (with means mu1 and mu2, say). It has density function

  f(y; mu1, mu2) = exp(-(mu1 + mu2)) * (mu1 / mu2)^(y/2) *
                   I_{|y|}(2 * sqrt(mu1 * mu2)),

where y is an integer, mu1 > 0 and mu2 > 0. Here, I_v is the modified Bessel function of the first kind with order v. The mean is mu1 - mu2 (returned as the fitted values), and the variance is mu1 + mu2. Simulated Fisher scoring is implemented.
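Since the distribution is that of a difference of two independent Poissons, the density can be cross-checked by summing a (truncated) Poisson convolution; a minimal sketch:

mu1 <- 1.5; mu2 <- 2.5; y <- 3
dskellam(y, mu1, mu2)
kk <- 0:200  # Truncated infinite sum: P(Y = y) = sum_k dpois(y+k) * dpois(k)
sum(dpois(y + kk, mu1) * dpois(kk, mu2))  # Should agree with the line above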
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
This VGAM family function seems fragile and very sensitive to the initial values. Use very cautiously!!
Numerical problems may occur for data if mu1 and/or mu2 are large.
Skellam, J. G. (1946). The frequency distribution of the difference between two Poisson variates belonging to different populations. Journal of the Royal Statistical Society, Series A, 109, 296.
## Not run:
sdata <- data.frame(x2 = runif(nn <- 1000))
sdata <- transform(sdata, mu1 = exp(1 + x2), mu2 = exp(1 + x2))
sdata <- transform(sdata, y = rskellam(nn, mu1, mu2))
fit1 <- vglm(y ~ x2, skellam, data = sdata, trace = TRUE, crit = "coef")
fit2 <- vglm(y ~ x2, skellam(parallel = TRUE), data = sdata, trace = TRUE)
coef(fit1, matrix = TRUE)
coef(fit2, matrix = TRUE)
summary(fit1)
# Likelihood ratio test for equal means:
pchisq(2 * (logLik(fit1) - logLik(fit2)),
       df = df.residual(fit2) - df.residual(fit1), lower.tail = FALSE)
lrtest(fit1, fit2)  # Alternative
## End(Not run)
Density and random generation for the Skellam distribution.
dskellam(x, mu1, mu2, log = FALSE)
rskellam(n, mu1, mu2)
x: vector of quantiles.
n: number of observations. Same as runif.
mu1, mu2: See skellam.
log: Logical; if TRUE, the logarithm is returned.
See skellam
, the VGAM family function
for estimating the parameters,
for the formula of the probability density function and other details.
dskellam
gives the density, and
rskellam
generates random deviates.
Numerical problems may occur for data if mu1 and/or mu2 are large. The normal approximation for this case has not been implemented yet.
## Not run:
mu1 <- 1; mu2 <- 2; x <- (-7):7
plot(x, dskellam(x, mu1, mu2), type = "h", las = 1, col = "blue",
     main = paste("Density of Skellam distribution with mu1 = ", mu1,
                  " and mu2 = ", mu2, sep = ""))
## End(Not run)
Density and random generation for the univariate skew-normal distribution.
dskewnorm(x, location = 0, scale = 1, shape = 0, log = FALSE)
rskewnorm(n, location = 0, scale = 1, shape = 0)
x: vector of quantiles.
n: number of observations. Same as runif.
location: The location parameter.
scale: The scale parameter. Must be positive.
shape: The shape parameter. It is called alpha in skewnormal.
log: Logical. If log = TRUE then the logarithm of the density is returned.
See skewnormal, which currently only estimates the shape parameter. More generally here, Y = location + scale * Z, where Z has a standard skew-normal distribution (see skewnormal), location is the location parameter and scale is the scale parameter.
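Assuming the usual Azzalini parameterization, the density relates to dnorm and pnorm as 2 * dnorm(z) * pnorm(alpha * z) after location-scale shifting; a minimal numerical sketch of this assumed identity:

x <- 1.2; loc <- 0.5; sc <- 2; alpha <- 3  # Arbitrary values
dskewnorm(x, location = loc, scale = sc, shape = alpha)
zedd <- (x - loc) / sc
(2 / sc) * dnorm(zedd) * pnorm(alpha * zedd)  # Should agree (assumed form)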
dskewnorm
gives the density,
rskewnorm
generates random deviates.
The default values of all three parameters correspond to the skew-normal being the standard normal distribution.
T. W. Yee
http://tango.stat.unipd.it/SN.
## Not run:
N <- 200  # Grid resolution
shape <- 7; x <- seq(-4, 4, len = N)
plot(x, dskewnorm(x, shape = shape), type = "l", col = "blue", las = 1,
     ylab = "", lty = 1, lwd = 2)
abline(v = 0, h = 0, col = "grey")
lines(x, dnorm(x), col = "orange", lty = 2, lwd = 2)
legend("topleft", leg = c(paste("Blue = dskewnorm(x, ", shape, ")", sep = ""),
                          "Orange = standard normal density"),
       lty = 1:2, lwd = 2, col = c("blue", "orange"))
## End(Not run)
Maximum likelihood estimation of the shape parameter of a univariate skew-normal distribution.
skewnormal(lshape = "identitylink", ishape = NULL, nsimEIM = NULL)
lshape, ishape, nsimEIM: See Links and CommonVGAMffArguments.
The univariate skew-normal distribution has a density function that can be written

  f(z) = 2 * dnorm(z) * pnorm(alpha * z),

where alpha is the shape parameter. Here, dnorm is the standard normal density and pnorm its cumulative distribution function. When alpha = 0 the result is a standard normal distribution. When alpha = 1 it models the distribution of the maximum of two independent standard normal variates. As the absolute value of the shape parameter increases, the skewness of the distribution increases. The limit as the shape parameter tends to positive infinity results in the folded normal distribution or half-normal distribution. When the shape parameter changes its sign, the density is reflected about z = 0.

The mean of the distribution is

  mu = alpha * sqrt(2 / (pi * (1 + alpha^2)))

and these are returned as the fitted values. The variance of the distribution is 1 - 2 * alpha^2 / (pi * (1 + alpha^2)). The Newton-Raphson algorithm is used unless the nsimEIM argument is used.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
It is well known that the EIM of Azzalini's skew-normal distribution is singular as the skewness parameter tends to zero, and this causes inferential problems.
It is a good idea to use several different initial values to ensure that the global solution is obtained.
This family function will be modified (hopefully soon) to handle a location and scale parameter too.
Thomas W. Yee
Azzalini, A. A. (1985). A class of distributions which include the normal. Scandinavian Journal of Statistics, 12, 171–178.
Azzalini, A. and Capitanio, A. (1999). Statistical applications of the multivariate skew-normal distribution. Journal of the Royal Statistical Society, Series B, Methodological, 61, 579–602.
skewnorm, uninormal, foldnormal.
sdata <- data.frame(y1 = rskewnorm(nn <- 1000, shape = 5))
fit1 <- vglm(y1 ~ 1, skewnormal, data = sdata, trace = TRUE)
coef(fit1, matrix = TRUE)
head(fitted(fit1), 1)
with(sdata, mean(y1))
## Not run:
with(sdata, hist(y1, prob = TRUE))
x <- with(sdata, seq(min(y1), max(y1), len = 200))
with(sdata, lines(x, dskewnorm(x, shape = Coef(fit1)), col = "blue"))
## End(Not run)
sdata <- data.frame(x2 = runif(nn))
sdata <- transform(sdata, y2 = rskewnorm(nn, shape = 1 + 2 * x2))
fit2 <- vglm(y2 ~ x2, skewnormal, data = sdata, trace = TRUE, crit = "coef")
summary(fit2)
Estimates the two parameters of the slash distribution by maximum likelihood estimation.
slash(lmu = "identitylink", lsigma = "loglink", imu = NULL, isigma = NULL,
      gprobs.y = ppoints(8), nsimEIM = 250, zero = NULL,
      smallno = .Machine$double.eps * 1000)
lmu, lsigma: Parameter link functions applied to the mu and sigma parameters.
imu, isigma: Initial values. A NULL means an initial value is chosen internally.
gprobs.y: Used to compute the initial values for mu.
nsimEIM, zero: See CommonVGAMffArguments.
smallno: Small positive number, used to test for the singularity.
The standard slash distribution is the distribution of the ratio of a standard normal variable to an independent standard uniform(0,1) variable. It is mainly of use in simulation studies. One of its properties is that it has heavy tails, similar to those of the Cauchy.
The general slash distribution can be obtained by replacing the univariate normal variable by a general normal random variable. It has a density that can be written as

  f(y) = 1 / (2 * sigma * sqrt(2 * pi))                     if y = mu,
  f(y) = [1 - exp(-z^2 / 2)] / (sqrt(2 * pi) * sigma * z^2) otherwise,

where z = (y - mu) / sigma, and mu and sigma are the mean and standard deviation of the univariate normal distribution respectively.
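Because the slash is by construction a normal divided by an independent uniform, variates can also be generated directly in base R; a minimal sketch comparing central quantiles (the tails are too heavy for means to be useful):

set.seed(1); nn <- 100000
y1 <- rslash(nn, mu = 2, sigma = 0.5)
y2 <- 2 + 0.5 * rnorm(nn) / runif(nn)  # Same construction, by definition
quantile(y1, probs = c(0.25, 0.50, 0.75))
quantile(y2, probs = c(0.25, 0.50, 0.75))  # Should be similar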
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
Fisher scoring using simulation is used. Convergence is often quite slow. Numerical problems may occur.
T. W. Yee and C. S. Chee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1, New York: Wiley.
Kafadar, K. (1982). A Biweight Approach to the One-Sample Problem. Journal of the American Statistical Association, 77, 416–424.
## Not run:
sdata <- data.frame(y = rslash(n = 1000, mu = 4, sigma = exp(2)))
fit <- vglm(y ~ 1, slash, data = sdata, trace = TRUE)
coef(fit, matrix = TRUE)
Coef(fit)
summary(fit)
## End(Not run)
Density function, distribution function, and random generation for the slash distribution.
dslash(x, mu = 0, sigma = 1, log = FALSE,
       smallno = .Machine$double.eps * 1000)
pslash(q, mu = 0, sigma = 1, very.negative = -10000,
       lower.tail = TRUE, log.p = FALSE)
rslash(n, mu = 0, sigma = 1)
x, q: vector of quantiles.
n: Same as runif.
mu, sigma: the mean and standard deviation of the univariate normal distribution.
log: Logical. If TRUE then the logarithm of the density is returned.
very.negative: Numeric, of length 1. A large negative value; for pslash it is used as the lower limit of the numerical integration.
smallno: See slash.
lower.tail, log.p: Same meaning as in pnorm.
See slash, the VGAM family function for estimating the two parameters by maximum likelihood estimation, for the formula of the probability density function and other details. Function pslash uses a for () loop and integrate, meaning it's very slow. It may also be inaccurate for extreme values of q, and returns 1 or 0 values when q is too extreme compared to very.negative.
dslash gives the density, pslash gives the distribution function, and rslash generates random deviates. pslash is very slow.
Thomas W. Yee and C. S. Chee
## Not run:
curve(dslash, col = "blue", ylab = "f(x)", -5, 5, ylim = c(0, 0.4), las = 1,
      main = "Standard slash, normal and Cauchy densities", lwd = 2)
curve(dnorm, col = "black", lty = 2, lwd = 2, add = TRUE)
curve(dcauchy, col = "orange", lty = 3, lwd = 2, add = TRUE)
legend("topleft", c("slash", "normal", "Cauchy"), lty = 1:3,
       col = c("blue", "black", "orange"), lwd = 2)
curve(pslash, col = "blue", -5, 5, ylim = 0:1)
pslash(c(-Inf, -20000, 20000, Inf))  # Gives a warning
## End(Not run)
Computes some square root–log mixture link transformations, including their inverse and the first few derivatives.
sloglink(theta, bvalue = NULL, taumix.log = 1, tol = 1e-13, nmax = 99,
         inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE,
         c10 = c(2, -2))
lcsloglink(theta, bvalue = NULL, pmix.log = 0.01, tol = 1e-13, nmax = 99,
           inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE,
           c10 = c(2, -2))
theta: Numeric or character. See below for further details.
bvalue: See Links.
taumix.log: Numeric, of length 1. Mixing parameter directed at loglink.
pmix.log: Numeric, of length 1. Mixing probability assigned to loglink.
tol, nmax: Arguments fed into a function implementing a vectorized bisection method.
inverse, deriv, short, tag: Details at Links.
c10: See sqrtlink.
For general information see alogitlink. The following holds for the linear combination (LC) variant. For deriv = 0, the answer is (1 - pmix.log) * sqrtlink(mu, c10 = c10) + pmix.log * loglink(mu) when inverse = FALSE, and if inverse = TRUE then a nonlinear equation is solved for mu, given eta passed in as theta. For deriv = 1, the function returns d eta / d theta as a function of theta if inverse = FALSE, else if inverse = TRUE then it returns the reciprocal.
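Per the description above, the LC variant at deriv = 0 is just the stated linear combination, which can be checked numerically; a minimal sketch using the default pmix.log = 0.01:

mu <- seq(0.1, 2, by = 0.3)
pmix <- 0.01  # The default pmix.log
max(abs(lcsloglink(mu) -
        ((1 - pmix) * sqrtlink(mu) + pmix * loglink(mu))))  # Should be ~0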
The default values for taumix.log and pmix.log may change in the future. The name and order of the arguments may change too.
Thomas W. Yee
alogitlink, sqrtlink, loglink, Links, poissonff, hdeff.
mu <- seq(0.01, 3, length = 10)
sloglink(mu)
max(abs(sloglink(sloglink(mu), inv = TRUE) - mu))  # 0?
This function represents an O-spline smooth term in a vgam formula and confers automatic smoothing parameter selection.
sm.os(x, ..., niknots = 6, spar = -1, o.order = 2,
      alg.niknots = c("s", ".nknots.smspl")[1], all.knots = FALSE,
      ridge.adj = 1e-5, spillover = 0.01, maxspar = 1e12,
      outer.ok = FALSE, fixspar = FALSE)
x: covariate (abscissae) to be smoothed. Also called the regressor.
...: Used to accommodate other arguments, if any.
niknots: numeric, the number of interior knots.
alg.niknots: character. The algorithm used to determine the number of interior knots. Only used when all.knots = FALSE and niknots is unspecified.
all.knots: logical. If TRUE then all distinct values of x are used as the interior knots.
spar, maxspar: the smoothing parameter and its maximum allowed value.
o.order: The order of the O'Sullivan penalized spline.
ridge.adj: small positive number to stabilize linear dependencies among B-spline bases.
spillover: small and positive proportion of the range used on the outside of the boundary values. This defines the endpoints of the B-spline basis.
outer.ok: Fed into the argument (by the same name) of splineDesign.
fixspar: logical. If TRUE then the smoothing parameters are fixed at spar.
This function is currently used by vgam to allow automatic smoothing parameter selection based on O-splines so as to minimize a UBRE quantity. In contrast, s operates by having a prespecified amount of smoothing, e.g., its df argument. When the sample size is reasonably large this function is recommended over s, also because backfitting is not required. This function therefore allows 2nd-generation VGAMs to be fitted (called G2-VGAMs, or Penalized-VGAMs). This function should only be used with vgam.
This function uses quantile
to
choose the knots, whereas sm.ps
chooses equally-spaced knots.
As Wand and Ormerod (2008) write,
in most situations the differences will be minor,
but it is possible for problems to arise
for either strategy by
constructing certain regression functions and
predictor variable distributions.
Any differences between O-splines and P-splines tend
to be at the boundaries. O-splines have
natural boundary constraints so that the solution is
linear beyond the boundary knots.
Some arguments, in decreasing order of precedence, are: all.knots, niknots, alg.niknots.
Unlike s, which is symbolic and does not perform any smoothing itself, this function does compute the penalized spline when used by vgam; it creates the appropriate columns of the model matrix. When this function is used within vgam, automatic smoothing parameter selection is implemented by calling magic after the necessary link-ups are done.
By default this function centres the component function. This function is also smart; it can be used for smart prediction (Section 18.6 of Yee (2015)). Automatic smoothing parameter selection is performed using performance-oriented iteration whereby an optimization problem is solved at each IRLS iteration.
This function works better when the sample size is large, e.g., when in the hundreds, say.
A matrix with attributes that are (only) used by vgam. The number of rows of the matrix is length(x). The number of columns is a function of the number of interior knots K and the order of the O-spline m: K + 2*m - 1. In code, this is niknots + 2 * o.order - 1, or using sm.ps-like arguments, ps.int + degree - 1 (where ps.int should be more generally interpreted as the number of intervals; the formula is the same as for sm.ps). It transpires then that sm.os and sm.ps are very similar.
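The column formula above can be confirmed directly; a minimal sketch (9 columns expected with the defaults niknots = 6 and o.order = 2):

xx <- runif(100)
ncol(sm.os(xx))  # Should be 6 + 2*2 - 1 = 9, per the formula above
ncol(sm.os(xx, niknots = 8, o.order = 3))  # Should be 8 + 2*3 - 1 = 13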
Being introduced into VGAM for the first time, this function (and those associated with it) should be used cautiously. Not all options are fully working or have been tested yet, and there are bound to be some bugs lurking around.
This function is currently under development and may change in the future.
One might try using this function with vglm so as to fit a regression spline; however, the default value of niknots will probably be too high for most data sets.
T. W. Yee, with some of the essential R code coming from the appendix of Wand and Ormerod (2008).
Wand, M. P. and Ormerod, J. T. (2008). On semiparametric regression with O'Sullivan penalized splines. Australian and New Zealand Journal of Statistics, 50(2): 179–198.
vgam, sm.ps, s, smartpred, is.smart, summarypvgam, smooth.spline, splineDesign, bs, magic.
sm.os(runif(20))

## Not run:
data("TravelMode", package = "AER")  # Need to install "AER" first
air.df <- subset(TravelMode, mode == "air")  # Form 4 smaller data frames
bus.df <- subset(TravelMode, mode == "bus")
trn.df <- subset(TravelMode, mode == "train")
car.df <- subset(TravelMode, mode == "car")
TravelMode2 <- data.frame(income = air.df$income,
                          wait.air = air.df$wait - car.df$wait,
                          wait.trn = trn.df$wait - car.df$wait,
                          wait.bus = bus.df$wait - car.df$wait,
                          gcost.air = air.df$gcost - car.df$gcost,
                          gcost.trn = trn.df$gcost - car.df$gcost,
                          gcost.bus = bus.df$gcost - car.df$gcost,
                          wait = air.df$wait)  # Value is unimportant
TravelMode2$mode <- subset(TravelMode, choice == "yes")$mode  # The response
TravelMode2 <- transform(TravelMode2, incom.air = income,
                         incom.trn = 0, incom.bus = 0)
set.seed(1)
TravelMode2 <- transform(TravelMode2, junkx2 = runif(nrow(TravelMode2)))

tfit2 <-
  vgam(mode ~ sm.os(gcost.air, gcost.trn, gcost.bus) + ns(junkx2, 4) +
              sm.os(incom.air, incom.trn, incom.bus) + wait,
       crit = "coef", multinomial(parallel = FALSE ~ 1), data = TravelMode2,
       xij = list(sm.os(gcost.air, gcost.trn, gcost.bus) ~
                  sm.os(gcost.air, gcost.trn, gcost.bus) +
                  sm.os(gcost.trn, gcost.bus, gcost.air) +
                  sm.os(gcost.bus, gcost.air, gcost.trn),
                  sm.os(incom.air, incom.trn, incom.bus) ~
                  sm.os(incom.air, incom.trn, incom.bus) +
                  sm.os(incom.trn, incom.bus, incom.air) +
                  sm.os(incom.bus, incom.air, incom.trn),
                  wait ~ wait.air + wait.trn + wait.bus),
       form2 = ~ sm.os(gcost.air, gcost.trn, gcost.bus) +
                 sm.os(gcost.trn, gcost.bus, gcost.air) +
                 sm.os(gcost.bus, gcost.air, gcost.trn) + wait +
                 sm.os(incom.air, incom.trn, incom.bus) +
                 sm.os(incom.trn, incom.bus, incom.air) +
                 sm.os(incom.bus, incom.air, incom.trn) +
                 junkx2 + ns(junkx2, 4) +
                 incom.air + incom.trn + incom.bus +
                 gcost.air + gcost.trn + gcost.bus +
                 wait.air + wait.trn + wait.bus)
par(mfrow = c(2, 2))
plot(tfit2, se = TRUE, lcol = "orange", scol = "blue", ylim = c(-4, 4))
summary(tfit2)
## End(Not run)
This function represents a P-spline smooth term in a vgam formula and confers automatic smoothing parameter selection.
sm.ps(x, ..., ps.int = NULL, spar = -1, degree = 3, p.order = 2,
      ridge.adj = 1e-5, spillover = 0.01, maxspar = 1e12,
      outer.ok = FALSE, mux = NULL, fixspar = FALSE)
x, ...: See sm.os.
ps.int: the number of equally-spaced B-spline intervals.
spar, maxspar: See sm.os.
mux: numeric. If given, it is used to help determine ps.int from the data (otherwise the default applies).
degree: degree of B-spline basis. Usually this will be 2 or 3; and the values 1 or 4 might possibly be used.
p.order: order of difference penalty (0 is the ridge penalty).
ridge.adj, spillover: See sm.os.
outer.ok, fixspar: See sm.os.
This function can be used by vgam to allow automatic smoothing parameter selection based on P-splines and minimizing a UBRE quantity. This function should only be used with vgam and is an alternative to sm.os; see that function for some details that also apply here.
A matrix with attributes that are (only) used by vgam. The number of rows of the matrix is length(x) and the number of columns is ps.int + degree - 1. The latter is because the function is centred. See sm.os.
This function is currently under development and may change in the future. In particular, the default for ps.int is subject to change.
B. D. Marx wrote the original function. Subsequent edits were made by T. W. Yee and C. Somchit.
Eilers, P. H. C. and Marx, B. D. (1996). Flexible smoothing with B-splines and penalties (with comments and rejoinder). Statistical Science, 11(2): 89–121.
sm.os, s, vgam, smartpred, is.smart, summarypvgam, splineDesign, bs, magic.
sm.ps(runif(20))
sm.ps(runif(20), ps.int = 5)

## Not run:
data("TravelMode", package = "AER")  # Need to install "AER" first
air.df <- subset(TravelMode, mode == "air")  # Form 4 smaller data frames
bus.df <- subset(TravelMode, mode == "bus")
trn.df <- subset(TravelMode, mode == "train")
car.df <- subset(TravelMode, mode == "car")
TravelMode2 <- data.frame(income = air.df$income,
                          wait.air = air.df$wait - car.df$wait,
                          wait.trn = trn.df$wait - car.df$wait,
                          wait.bus = bus.df$wait - car.df$wait,
                          gcost.air = air.df$gcost - car.df$gcost,
                          gcost.trn = trn.df$gcost - car.df$gcost,
                          gcost.bus = bus.df$gcost - car.df$gcost,
                          wait = air.df$wait)  # Value is unimportant
TravelMode2$mode <- subset(TravelMode, choice == "yes")$mode  # The response
TravelMode2 <- transform(TravelMode2, incom.air = income,
                         incom.trn = 0, incom.bus = 0)
set.seed(1)
TravelMode2 <- transform(TravelMode2, junkx2 = runif(nrow(TravelMode2)))

tfit2 <-
  vgam(mode ~ sm.ps(gcost.air, gcost.trn, gcost.bus) + ns(junkx2, 4) +
              sm.ps(incom.air, incom.trn, incom.bus) + wait,
       crit = "coef", multinomial(parallel = FALSE ~ 1), data = TravelMode2,
       xij = list(sm.ps(gcost.air, gcost.trn, gcost.bus) ~
                  sm.ps(gcost.air, gcost.trn, gcost.bus) +
                  sm.ps(gcost.trn, gcost.bus, gcost.air) +
                  sm.ps(gcost.bus, gcost.air, gcost.trn),
                  sm.ps(incom.air, incom.trn, incom.bus) ~
                  sm.ps(incom.air, incom.trn, incom.bus) +
                  sm.ps(incom.trn, incom.bus, incom.air) +
                  sm.ps(incom.bus, incom.air, incom.trn),
                  wait ~ wait.air + wait.trn + wait.bus),
       form2 = ~ sm.ps(gcost.air, gcost.trn, gcost.bus) +
                 sm.ps(gcost.trn, gcost.bus, gcost.air) +
                 sm.ps(gcost.bus, gcost.air, gcost.trn) + wait +
                 sm.ps(incom.air, incom.trn, incom.bus) +
                 sm.ps(incom.trn, incom.bus, incom.air) +
                 sm.ps(incom.bus, incom.air, incom.trn) +
                 junkx2 + ns(junkx2, 4) +
                 incom.air + incom.trn + incom.bus +
                 gcost.air + gcost.trn + gcost.bus +
                 wait.air + wait.trn + wait.bus)
par(mfrow = c(2, 2))
plot(tfit2, se = TRUE, lcol = "orange", scol = "blue", ylim = c(-4, 4))
summary(tfit2)
## End(Not run)
smart.expression
is an S expression for
a smart function to call itself. It is best if you go through it line
by line, but most users will not need to know anything about it.
It requires the primary argument of the smart function to be called
"x"
.
The list component match.call
must be assigned the
value of match.call()
in the smart function; this is so
that the smart function can call itself later.
print(sm.min2)
Determine which of three modes the smart prediction is currently in.
smart.mode.is(mode.arg = NULL)
mode.arg: a character string, either "read", "write" or "neutral".
Smart functions such as bs and poly need to know what mode smart prediction is in. If it is in "write" mode then the parameters are saved to .smart.prediction using put.smart. If in "read" mode then the parameters are read in using get.smart. If in "neutral" mode then the smart function behaves like an ordinary function.
If mode.arg is given, then either TRUE or FALSE is returned. If mode.arg is not given, then the mode ("neutral", "read" or "write") is returned. Usually, the mode is "neutral".
print(sm.min1)
smart.mode.is()  # Returns "neutral"
smart.mode.is(smart.mode.is())  # Returns TRUE
Data-dependent parameters in formula terms can cause problems when predicting. The smartpred package saves data-dependent parameters on the object so that the bug is fixed. The lm and glm functions have been fixed properly. Note that the VGAM package by T. W. Yee automatically comes with smart prediction.
sm.bs(x, df = NULL, knots = NULL, degree = 3, intercept = FALSE,
      Boundary.knots = range(x))
sm.ns(x, df = NULL, knots = NULL, intercept = FALSE,
      Boundary.knots = range(x))
sm.poly(x, ..., degree = 1, coefs = NULL, raw = FALSE)
sm.scale(x, center = TRUE, scale = TRUE)
x: the main argument; same as in bs, ns, poly and scale.
df, knots, intercept, Boundary.knots: See bs and/or ns.
degree, ..., coefs, raw: See poly.
center, scale: See scale.
R version 1.6.0 introduced only a partial fix for the prediction problem: it does not work all the time, e.g., for terms such as I(poly(x, 3)), poly(c(scale(x)), 3), bs(scale(x), 3), scale(scale(x)). See the examples below. Smart prediction, however, will always work.
The basic idea is that the functions in the formula are now smart, and the
modelling functions make use of these smart functions. Smart prediction
works in two ways: using smart.expression
, or using a
combination of put.smart
and get.smart
.
The usual value returned by bs, ns, poly and scale. When used with functions such as vglm the data-dependent parameters are saved in one slot component called smart.prediction.
The variables .max.smart, .smart.prediction and .smart.prediction.counter are created while the model is being fitted. They are created in a new environment called smartpredenv. These variables are deleted after the model has been fitted. However, if there is an error in the model fitting function or the fitting model is killed (e.g., by typing control-C) then these variables will be left in smartpredenv. At the beginning of model fitting, these variables are deleted if present in smartpredenv.

During prediction, the variables .smart.prediction and .smart.prediction.counter are reconstructed and read by the smart functions when the model frame is re-evaluated. After prediction, these variables are deleted.

If the modelling function is used with argument smart = FALSE (e.g., vglm(..., smart = FALSE)) then smart prediction will not be used, and the results should match with the original R functions.
The functions bs, ns, poly and scale are now left alone (from 2014-05 onwards) and are no longer smart. They work via safe prediction. The smart versions of these functions have been renamed and they begin with "sm.".
The functions predict.bs and predict.ns are not smart. That is because they operate on objects that contain attributes only and do not have list components or slots. The function predict.poly is not smart.
T. W. Yee and T. J. Hastie
get.smart.prediction, get.smart, put.smart, smart.expression, smart.mode.is, setup.smart, wrapup.smart.
For vgam in VGAM, sm.ps is important. Commonly used data-dependent functions include scale, poly, bs, ns.
In R, the functions bs and ns are in the splines package, which is automatically loaded because it contains compiled code that bs and ns call.
The functions vglm, vgam, rrvglm and cqo in T. W. Yee's VGAM package are examples of modelling functions that employ smart prediction.
# Create some data first
n <- 20
set.seed(86)  # For reproducibility of the random numbers
ldata <- data.frame(x2 = sort(runif(n)), y = sort(runif(n)))
library("splines")  # To get ns() in R

# This will work for R 1.6.0 and later
fit <- lm(y ~ ns(x2, df = 5), data = ldata)
## Not run:
plot(y ~ x2, data = ldata)
lines(fitted(fit) ~ x2, data = ldata)
new.ldata <- data.frame(x2 = seq(0, 1, len = n))
points(predict(fit, new.ldata) ~ x2, new.ldata, type = "b", col = 2, err = -1)
## End(Not run)

# The following fails for R 1.6.x and later. It can be
# made to work with smart prediction provided
# ns is changed to sm.ns and scale is changed to sm.scale:
fit1 <- lm(y ~ ns(scale(x2), df = 5), data = ldata)
## Not run:
plot(y ~ x2, data = ldata, main = "Safe prediction fails")
lines(fitted(fit1) ~ x2, data = ldata)
points(predict(fit1, new.ldata) ~ x2, new.ldata, type = "b", col = 2, err = -1)
## End(Not run)

# Fit the above using smart prediction
## Not run:
library("VGAM")  # The following requires the VGAM package to be loaded
fit2 <- vglm(y ~ sm.ns(sm.scale(x2), df = 5), uninormal, data = ldata)
fit2@smart.prediction
plot(y ~ x2, data = ldata, main = "Smart prediction")
lines(fitted(fit2) ~ x2, data = ldata)
points(predict(fit2, new.ldata, type = "response") ~ x2,
       data = new.ldata, type = "b", col = 2, err = -1)
## End(Not run)
Return any special values or quantities in a fitted object, and in particular in a VGLM fit
specials(object, ...)
specialsvglm(object, ...)
object: an object of class "vglm".
...: any additional arguments, to future-proof this function.
This extractor function was motivated by GAITD regression
(Yee and Ma, 2024)
where the values from three disjoint sets are referred
to as special.
More generally, S4 methods functions can be written so that
specials()
will work on any S4 object, where
what is called special depends on the methodology at hand.
Returns any ‘special’ values or quantities associated with a fitted regression model. This is often something simple such as a list or a vector.
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39 (in press).
vglm, vglm-class, inflated, altered, truncated, Gaitdpois, gaitdpoisson.
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
fit1 <- vglm(y ~ 1, gaitdpoisson(a.mix = 0), data = abdata,
             weight = w, subset = w > 0)
specials(fit1)
Produces a spike plot of a numeric vector.
spikeplot(x, freq = FALSE, as.table = FALSE, col = par("col"),
          lty = par("lty"), lwd = par("lwd"), lend = par("lend"),
          type = "h", xlab = deparse1(substitute(x)), ylab = NULL,
          capped = FALSE, cex = sqrt(lwd) / 2, pch = 19, pcol = col,
          scol = NULL, slty = NULL, slwd = NULL, new.plot = TRUE,
          offset.x = 0, ymux = 1, ...)
x |
Numeric, passed into |
freq |
Logical. If |
as.table |
Logical.
If |
col , type , lty , lwd
|
See |
lend , xlab , ylab
|
See |
capped , cex , pch , pcol
|
First argument is logical.
If |
scol , slty , slwd
|
Similar to |
new.plot , offset.x
|
Logical and numeric.
Add to an existing plot? If so, set |
ymux |
Numeric, y-multiplier. The response is multiplied by |
... |
Additional graphical arguments passed into an ordinary
|
Heaping is a very common phenomenon in retrospective self-reported survey data. Also known as digit preference, it is often characterized by an excess of multiples of 10 or 5 upon rounding. For this type of data, this simple function is meant to be a convenient way of plotting the frequencies or sample proportions of a vector x representing a discrete random variable. This type of plot is known as a spike plot in STATA circles. If table(x) works then this function should hopefully work too. The default for type means that any heaping and seeping should be easily seen. If such features exist then GAITD regression is potentially useful; see gaitdpoisson etc.
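As a hedged illustration, heaping can be simulated and then displayed with spikeplot() as follows (the data-generating mechanism below is entirely hypothetical):

set.seed(1)
age <- round(rnorm(500, mean = 40, sd = 12))
heap <- runif(500) < 0.4               # Suppose 40% of subjects heap
age[heap] <- 5 * round(age[heap] / 5)  # They round to multiples of 5
## Not run: spikeplot(age, lwd = 2)  # Spikes at multiples of 5
## End(Not run)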
Currently, missing values are totally ignored because table(x) is used without further arguments; this might change in the future.
Returns table(x) invisibly.
T. W. Yee.
table, plot, par, deparse1, dgaitdplot, plotdgaitd, gaitdpoisson.
## Not run: spikeplot(with(marital.nz, age), col = "pink2", lwd = 2) ## End(Not run)
Computes the square root and folded square root transformations, including their inverse and their first two derivatives.
foldsqrtlink(theta, min = 0, max = 1, mux = sqrt(2),
             inverse = FALSE, deriv = 0, short = TRUE, tag = FALSE)
sqrtlink(theta, inverse = FALSE, deriv = 0, short = TRUE,
         tag = FALSE, c10 = c(2, -2))
theta |
Numeric or character. See below for further details. |
min , max , mux
|
These are called |
inverse , deriv , short , tag
|
Details at |
c10 |
Numeric, 2-vector |
The folded square root link function can be applied to parameters that lie between min and max inclusive. Numerical values of theta out of range result in NA or NaN. More general information can be found at alogitlink.
For foldsqrtlink with deriv = 0, the link is
mux * (sqrt(theta - min) - sqrt(max - theta))
when inverse = FALSE; if inverse = TRUE then a more complicated inverse function is used, which returns NA unless theta is between -mux*sqrt(max-min) and mux*sqrt(max-min).

For sqrtlink with deriv = 0 and c10 = 1:0, the link is sqrt(theta) when inverse = FALSE; if inverse = TRUE then the square theta^2 is returned.
For deriv = 1 the function returns d eta / d theta as a function of theta if inverse = FALSE; if inverse = TRUE then it returns the reciprocal.
For foldsqrtlink with the default arguments, the link function value at theta equal to 0 and 1 is -sqrt(2) and +sqrt(2) respectively. These are finite values, therefore this link function cannot be used for general modelling of probabilities because of numerical problems, e.g., with binomialff, cumulative. See the example below.
Thomas W. Yee
Links, poissonff, sloglink, hdeff.
p <- seq(0.01, 0.99, by = 0.01)
foldsqrtlink(p)
max(abs(foldsqrtlink(foldsqrtlink(p), inverse = TRUE) - p))  # 0

p <- c(seq(-0.02, 0.02, by = 0.01), seq(0.97, 1.02, by = 0.01))
foldsqrtlink(p)  # Has NAs

## Not run: 
p <- seq(0.01, 0.99, by = 0.01)
par(mfrow = c(2, 2), lwd = (mylwd <- 2))
y <- seq(-4, 4, length = 100)
for (d in 0:1) {
  matplot(p, cbind(logitlink(p, deriv = d), foldsqrtlink(p, deriv = d)),
          col = "blue", ylab = "transformation",
          main = ifelse(d == 0, "Some probability links",
                        "First derivative"), type = "n", las = 1)
  lines(p,    logitlink(p, deriv = d), col = "green")
  lines(p,   probitlink(p, deriv = d), col = "blue")
  lines(p,  clogloglink(p, deriv = d), col = "red")
  lines(p, foldsqrtlink(p, deriv = d), col = "tan")
  if (d == 0) {
    abline(v = 0.5, h = 0, lty = "dashed")
    legend(0, 4.5, c("logitlink", "probitlink", "clogloglink",
                     "foldsqrtlink"), lwd = 2,
           col = c("green", "blue", "red", "tan"))
  } else
    abline(v = 0.5, lty = "dashed")
}
for (d in 0) {
  matplot(y, cbind(logitlink(y, deriv = d, inverse = TRUE),
                   foldsqrtlink(y, deriv = d, inverse = TRUE)),
          type = "n", col = "blue", xlab = "transformation", ylab = "p",
          lwd = 2, las = 1,
          main = if (d == 0) "Some inverse probability link functions"
                 else "First derivative")
  lines(y,    logitlink(y, deriv = d, inverse = TRUE), col = "green")
  lines(y,   probitlink(y, deriv = d, inverse = TRUE), col = "blue")
  lines(y,  clogloglink(y, deriv = d, inverse = TRUE), col = "red")
  lines(y, foldsqrtlink(y, deriv = d, inverse = TRUE), col = "tan")
  if (d == 0) {
    abline(h = 0.5, v = 0, lty = "dashed")
    legend(-4, 1, c("logitlink", "probitlink", "clogloglink",
                    "foldsqrtlink"), lwd = 2,
           col = c("green", "blue", "red", "tan"))
  }
}
par(lwd = 1)
## End(Not run)

# This is lucky to converge
fit.h <- vglm(agaaus ~ sm.bs(altitude),
              binomialff(foldsqrtlink(mux = 5)), hunua, trace = TRUE)
## Not run: 
plotvgam(fit.h, se = TRUE, lcol = "orange", scol = "orange",
         main = "Orange is Hunua, Blue is Waitakere")
## End(Not run)
head(predict(fit.h, hunua, type = "response"))

## Not run: 
# The following fails.
pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let,
            cumulative(foldsqrtlink(mux = 10), par = TRUE, rev = TRUE),
            data = pneumo, trace = TRUE, maxit = 200)
## End(Not run)
Fits a stopping ratio logit/probit/cloglog/cauchit/... regression model to an ordered (preferably) factor response.
sratio(link = "logitlink", parallel = FALSE, reverse = FALSE, zero = NULL, ynames = FALSE, Thresh = NULL, Trev = reverse, Tref = if (Trev) "M" else 1, whitespace = FALSE)
sratio(link = "logitlink", parallel = FALSE, reverse = FALSE, zero = NULL, ynames = FALSE, Thresh = NULL, Trev = reverse, Tref = if (Trev) "M" else 1, whitespace = FALSE)
link |
Link function applied to the |
parallel |
A logical, or formula specifying which terms have equal/unequal coefficients. |
reverse |
Logical.
By default, the stopping ratios used are
|
ynames |
See |
zero |
Can be an integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The values must be from the set {1,2,..., |
Thresh , Trev , Tref
|
See |
whitespace |
See |
In this help file the response Y is assumed to be a factor with ordered values 1, 2, ..., M+1, so that M is the number of linear/additive predictors eta_j.

There are a number of definitions for the continuation ratio in the literature. To make life easier, in the VGAM package, we use continuation ratios (see cratio) and stopping ratios. Continuation ratios deal with quantities such as logitlink(P[Y>j|Y>=j]), whereas stopping ratios deal with quantities such as logitlink(P[Y=j|Y>=j]).
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm, rrvglm and vgam.
No check is made to verify that the response is ordinal if the response is a matrix; see ordered.
Boersch-Supan (2021) considers a sparse data set (called budworm) and the numerical problems encountered when fitting models such as cratio, sratio and cumulative. Although improvements to links such as clogloglink have been made, currently these family functions have not been properly adapted to handle sparse data as well as they could.
The response should be either a matrix of counts (with row sums that are all positive), or a factor. In both cases, the y slot returned by vglm/vgam/rrvglm is the matrix of counts. For a nominal (unordered) factor response, the multinomial logit model (multinomial) is more appropriate.
Here is an example of the usage of the parallel argument. If there are covariates x1, x2 and x3, then parallel = TRUE ~ x1 + x2 - 1 and parallel = FALSE ~ x3 are equivalent. This would constrain the regression coefficients for x1 and x2 to be equal; those of the intercepts and x3 would be different. A minimal sketch of this equivalence follows.
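The sketch below instantiates the equivalence on simulated data; the data-generating mechanism is hypothetical and chosen only to give an ordered 3-level response:

set.seed(1)
sdata <- data.frame(x1 = runif(n <- 400), x2 = runif(n), x3 = runif(n))
sdata <- transform(sdata,  # A hypothetical ordered 3-level response:
  y = cut(runif(n) + 0.5 * x1, breaks = c(-Inf, 0.5, 1, Inf),
          labels = 1:3, ordered_result = TRUE))
fitA <- vglm(y ~ x1 + x2 + x3, sratio(parallel = TRUE ~ x1 + x2 - 1),
             data = sdata)
fitB <- vglm(y ~ x1 + x2 + x3, sratio(parallel = FALSE ~ x3),
             data = sdata)
max(abs(coef(fitA) - coef(fitB)))  # Effectively 0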
Thomas W. Yee
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
Boersch-Supan, P. H. (2021). Modeling insect phenology using ordinal regression and continuation ratio models. ReScience C, 7.1, 1–14.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Tutz, G. (2012). Regression for Categorical Data, Cambridge: Cambridge University Press.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10.
cratio, acat, cumulative, multinomial, CM.equid, CommonVGAMffArguments, margeff, pneumo, budworm, logitlink, probitlink, clogloglink, cauchitlink.
pneumo <- transform(pneumo, let = log(exposure.time))
(fit <- vglm(cbind(normal, mild, severe) ~ let,
             sratio(parallel = TRUE), data = pneumo))
coef(fit, matrix = TRUE)
constraints(fit)
predict(fit)
predict(fit, untransform = TRUE)
Select a formula-based model by AIC.
step4(object, ...)
step4vglm(object, scope, direction = c("both", "backward", "forward"),
          trace = 1, keep = NULL, steps = 1000, k = 2, ...)
object |
an object of class |
scope |
See |
direction |
See |
trace , keep
|
See |
steps , k
|
See |
... |
any additional arguments to
|
This function is a direct adaptation of step for vglm-class objects. Since step is not generic, the name step4() was adopted; it is generic, as well as being S4 rather than S3. The intent is that this function should work as similarly as possible to step.
Internally, the methods function for vglm-class objects calls add1.vglm and drop1.vglm repeatedly. The results are placed in the post slot of the stepwise-selected model that is returned. There are up to two additional components: an "anova" component corresponding to the steps taken in the search, as well as a "keep" component if the keep = argument was supplied in the call.
In general, the same warnings in drop1.glm and drop1.vglm apply here.
add1.vglm, drop1.vglm, vglm, trim.constraints, add1.glm, drop1.glm, backPain2, step, update.
data("backPain2", package = "VGAM") summary(backPain2) fit1 <- vglm(pain ~ x2 + x3 + x4 + x2:x3 + x2:x4 + x3:x4, propodds, data = backPain2) spom1 <- step4(fit1) summary(spom1) spom1@post$anova
data("backPain2", package = "VGAM") summary(backPain2) fit1 <- vglm(pain ~ x2 + x3 + x4 + x2:x3 + x2:x4 + x3:x4, propodds, data = backPain2) spom1 <- step4(fit1) summary(spom1) spom1@post$anova
Estimating the parameters of a Student t distribution.
studentt (ldf = "logloglink", idf = NULL, tol1 = 0.1, imethod = 1)
studentt2(df = Inf, llocation = "identitylink", lscale = "loglink",
          ilocation = NULL, iscale = NULL, imethod = 1, zero = "scale")
studentt3(llocation = "identitylink", lscale = "loglink",
          ldf = "logloglink", ilocation = NULL, iscale = NULL,
          idf = NULL, imethod = 1, zero = c("scale", "df"))
llocation , lscale , ldf
|
Parameter link functions for each parameter,
e.g., for degrees of freedom |
ilocation , iscale , idf
|
Optional initial values. If given, the values must be in range. The default is to compute an initial value internally. |
tol1 |
A positive value, the tolerance for testing whether an initial value is 1. Best to leave this argument alone. |
df |
Numeric, user-specified degrees of freedom. It may be of length equal to the number of columns of a response matrix. |
imethod , zero
|
The Student t density function is
f(y; nu) = Gamma((nu+1)/2) / (sqrt(nu * pi) * Gamma(nu/2)) *
           (1 + y^2 / nu)^(-(nu+1)/2)
for all real y, where nu is the degrees of freedom parameter. Then E(Y) = 0 if nu > 1 (returned as the fitted values), and Var(Y) = nu / (nu - 2) for nu > 2. When nu = 1 the Student t-distribution corresponds to the standard Cauchy distribution, cauchy1. When nu = 2 with a scale parameter of sqrt(2), the Student t-distribution corresponds to the standard (Koenker) distribution, sc.studentt2.
The degrees of freedom can be treated as a parameter to be estimated, and as a real number rather than an integer. The Student t distribution is used for a variety of reasons in statistics, including robust regression.

Let Y = (T - mu) / sigma, where mu and sigma are the location and scale parameters respectively. Then studentt3 estimates the location, scale and degrees of freedom parameters. And studentt2 estimates the location and scale parameters for a user-specified degrees of freedom, df. And studentt estimates the degrees of freedom parameter only. The fitted values are the location parameters. By default the linear/additive predictors are (mu, log(sigma), loglog(nu)) or subsets thereof.
In general convergence can be slow, especially when there are covariates.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
studentt3() and studentt2() can handle multiple responses.

Practical experience has shown that reasonably good initial values are required. If convergence failure occurs try using arguments such as idf. Local solutions are also possible, especially when the degrees of freedom is close to unity or the scale parameter is close to zero.

A standard normal distribution corresponds to a t distribution with infinite degrees of freedom. Consequently, if the data are close to normal, there may be convergence problems; it is best to use uninormal instead.
T. W. Yee
Student (1908). The probable error of a mean. Biometrika, 6, 1–25.
Zhu, D. and Galbraith, J. W. (2010). A generalized asymmetric Student-t distribution with application to financial econometrics. Journal of Econometrics, 157, 297–305.
uninormal, cauchy1, logistic, huber2, sc.studentt2, TDist, simulate.vlm.
tdata <- data.frame(x2 = runif(nn <- 1000))
tdata <- transform(tdata, y1 = rt(nn, df = exp(exp(0.5 - x2))),
                          y2 = rt(nn, df = exp(exp(0.5 - x2))))
fit1 <- vglm(y1 ~ x2, studentt, data = tdata, trace = TRUE)
coef(fit1, matrix = TRUE)

# df inputted into studentt2() not quite right:
fit2 <- vglm(y1 ~ x2, studentt2(df = exp(exp(0.5))), tdata)
coef(fit2, matrix = TRUE)

fit3 <- vglm(cbind(y1, y2) ~ x2, studentt3, tdata, trace = TRUE)
coef(fit3, matrix = TRUE)
These functions are all methods for class "drrvglm" or "summary.drrvglm" objects, or for class "rrvglm" or "summary.rrvglm" objects.
## S3 method for class 'drrvglm'
summary(object, correlation = FALSE, dispersion = NULL, digits = NULL,
        numerical = TRUE, h.step = 0.005, omit123 = FALSE,
        omit13 = FALSE, fixA = FALSE, presid = FALSE,
        signif.stars = getOption("show.signif.stars"),
        nopredictors = FALSE, eval0 = TRUE, ...)
## S3 method for class 'summary.drrvglm'
show(x, digits = NULL, quote = TRUE, prefix = "", signif.stars = NULL)
## S3 method for class 'rrvglm'
summary(object, correlation = FALSE, dispersion = NULL, digits = NULL,
        numerical = TRUE, h.step = 0.005, omit123 = FALSE,
        omit13 = FALSE, fixA = TRUE, presid = FALSE,
        signif.stars = getOption("show.signif.stars"),
        nopredictors = FALSE, upgrade = FALSE, ...)
## S3 method for class 'summary.rrvglm'
show(x, digits = NULL, quote = TRUE, prefix = "", signif.stars = NULL)
object |
an object of class
|
x |
an object of class
|
dispersion |
used mainly for GLMs. Not really implemented in VGAM so should not be used. |
correlation |
See |
digits |
See |
signif.stars |
See |
presid , quote
|
See |
nopredictors |
See |
upgrade |
Logical.
Upgrade |
numerical |
Logical,
use a finite difference approximation
for partial derivatives?
If |
h.step |
Numeric,
positive and close to 0.
If |
fixA |
Logical,
if |
omit13 |
Logical,
if |
omit123 |
Logical.
If |
prefix |
See |
eval0 |
Logical. Check if V is positive-definite? That is, all its eigenvalues are positive. |
... |
Logical argument |
Most of this document concerns DRR-VGLMs, but it applies equally well to RR-VGLMs as a special case.
The overall variance-covariance matrix
(called V below)
is computed. Since the parameters
comprise the elements of
the matrices A, B1 and C
(called here block matrices 1, 2, 3
respectively),
and an alternating algorithm is used for
estimation, then there are two overlapping
submodels that are fitted within an IRLS
algorithm. These have blocks 1 and 2, and
2 and 3, so that B1 is common to both.
They are combined into one large overall
variance-covariance matrix.
Argument fixA
specifies which submodel
the B1 block is taken from.
Block (1,3) is the most difficult to
compute and numerical approximations based on
first derivatives are used by default for this.
Sometimes the computed V
is not positive-definite.
If so,
then the standard errors will be NA
.
To avoid this problem,
try varying h.step
or refitting the model with a different
Index.corner
.
Arguments omit13 and omit123 can also be used to give approximate answers. If V is not positive-definite then this may indicate that the model does not fit the data very well, e.g., Rank is not a good value. Potentially, there are many reasons why the model may be ill-conditioned. Try several options and set trace = TRUE to monitor convergence; this is informative about how well the model and data agree.
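For example, a hedged sketch (reusing the dfit1 object from the example below): if summary(dfit1) reports NA standard errors, then calls such as

## Not run: summary(dfit1, omit13 = TRUE, h.step = 0.01)
## End(Not run)

may provide approximate standard errors at the cost of some accuracy.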
How can one fit an ordinary RR-VGLM as a DRR-VGLM? If one uses corner constraints (the default) then one should input H.A as a list containing Rank diag(M) matrices, one for each column of A. Then, since Corner = TRUE by default, object@H.A.alt has certain columns deleted due to corner constraints. In contrast, object@H.A.thy is the H.A that was inputted. FYI, the alt suffix indicates the alternating algorithm, while the suffix thy stands for theory.
summarydrrvglm returns an object of class "summary.drrvglm".
DRR-VGLMs are a recent development, so it will take some time for things to be totally ironed out. RR-VGLMs were developed a long time ago and are better established; however, they have only recently been documented here.
Note that vcov methods exist for rrvglm-class and drrvglm-class objects.
Sometimes this function can take a long time because numerical derivatives are computed.
T. W. Yee.
Chapter 5 of: Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer. Sections 5.2.2 and 5.3 are particularly relevant.
rrvglm
,
rrvglm.control
,
vcovdrrvglm
,
CM.free
,
summaryvglm
,
summary.rrvglm-class
,
summary.drrvglm-class
.
## Not run: 
# Fit a rank-1 RR-VGLM as a DRR-VGLM.
set.seed(1); n <- 1000; S <- 6  # S must be even
myrank <- 1
rdata <- data.frame(x1 = runif(n), x2 = runif(n),
                    x3 = runif(n), x4 = runif(n))
dval <- ncol(rdata)  # Number of covariates
# Involves x1, x2, ... a rank-1 model:
ymatrix <- with(rdata,
                matrix(rpois(n*S, exp(3 + x1 - 0.5*x2)), n, S))
H.C <- vector("list", dval)  # Ordinary "rrvglm"
for (i in 1:dval) H.C[[i]] <- CM.free(myrank)
names(H.C) <- paste0("x", 1:dval)
H.A <- list(CM.free(S))  # rank-1
rfit1 <- rrvglm(ymatrix ~ x1 + x2 + x3 + x4, poissonff,
                rdata, trace = TRUE)
class(rfit1)
dfit1 <- rrvglm(ymatrix ~ x1 + x2 + x3 + x4, poissonff,
                rdata, trace = TRUE,
                H.A = H.A,  # drrvglm
                H.C = H.C)  # drrvglm
class(dfit1)
Coef(rfit1)  # The RR-VGLM is the same as
Coef(dfit1)  # the DRR-VGLM.
max(abs(predict(rfit1) - predict(dfit1)))  # 0
abs(logLik(rfit1) - logLik(dfit1))  # 0
summary(rfit1)
summary(dfit1)
## End(Not run)
These functions are all methods for class "pvgam" or summary.pvgam objects.
summarypvgam(object, dispersion = NULL, digits = options()$digits - 2,
             presid = TRUE)
## S3 method for class 'summary.pvgam'
show(x, quote = TRUE, prefix = "", digits = options()$digits - 2,
     signif.stars = getOption("show.signif.stars"))
object |
an object of class |
x |
an object of class |
dispersion , digits , presid
|
See |
quote , prefix , signif.stars
|
See |
This methods function reports a summary more similar to summary.gam from mgcv than to summary.gam() from gam. It applies to G2-VGAMs using sm.os and O-splines, else sm.ps and P-splines. In particular, the hypothesis test for whether each sm.os or sm.ps term can be deleted follows summary.gam quite closely. The p-values from this type of test tend to be biased downwards (too small) and correspond to p.type = 5. It is hoped that improved p-values will be implemented in the near future, somewhat like the default of summary.gam. This methods function was adapted from summary.gam.
summarypvgam returns an object of class "summary.pvgam"; see summary.pvgam-class.
See sm.os.
vgam, summaryvgam, summary.pvgam-class, sm.os, sm.ps, summary.glm, summary.lm, summary.gam from mgcv, summaryvgam for G1-VGAMs.
## Not run: 
hfit2 <- vgam(agaaus ~ sm.os(altitude), binomialff, data = hunua)
coef(hfit2, matrix = TRUE)
summary(hfit2)
## End(Not run)
These functions are all methods for class vgam or summary.vgam objects.
summaryvgam(object, dispersion = NULL, digits = options()$digits - 2,
            presid = TRUE, nopredictors = FALSE)
## S3 method for class 'summary.vgam'
show(x, quote = TRUE, prefix = "", digits = options()$digits - 2,
     nopredictors = NULL)
object |
an object of class |
x |
an object of class |
dispersion , digits , presid
|
See |
quote , prefix , nopredictors
|
See |
This methods function reports a summary more similar to summary.gam() from gam than to summary.gam from mgcv. It applies to G1-VGAMs using s and vector backfitting. In particular, an approximate score test for linearity is conducted for each s term; see Section 4.3.4 of Yee (2015) for details. The p-values from this type of test tend to be biased upwards (too large).
summaryvgam returns an object of class "summary.vgam"; see summary.vgam-class.
vgam, summary.glm, summary.lm, summary.gam from mgcv, summarypvgam for P-VGAMs.
hfit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
summary(hfit)
summary(hfit)@anova  # Table for (approximate) testing of linearity
These functions are all methods for class vglm or summary.vglm objects.
summaryvglm(object, correlation = FALSE, dispersion = NULL,
            digits = NULL, presid = FALSE, HDEtest = TRUE,
            hde.NA = TRUE, threshold.hde = 0.001,
            signif.stars = getOption("show.signif.stars"),
            nopredictors = FALSE, lrt0.arg = FALSE, score0.arg = FALSE,
            wald0.arg = FALSE, values0 = 0, subset = NULL,
            omit1s = TRUE, ...)
## S3 method for class 'summary.vglm'
show(x, digits = max(3L, getOption("digits") - 3L), quote = TRUE,
     prefix = "", presid = length(x@pearson.resid) > 0, HDEtest = TRUE,
     hde.NA = TRUE, threshold.hde = 0.001, signif.stars = NULL,
     nopredictors = NULL, top.half.only = FALSE, ...)
object |
an object of class |
x |
an object of class |
dispersion |
used mainly for GLMs.
See |
correlation |
logical; if |
digits |
the number of significant digits to use when printing. |
signif.stars |
logical;
if |
presid |
Pearson residuals; print out some summary statistics of these? |
HDEtest |
logical;
if |
hde.NA |
logical;
if a test for the Hauck-Donner effect is done
(for each coefficient)
and it is affirmative should that Wald test p-value be replaced by
an |
threshold.hde |
numeric;
used if |
quote |
Fed into |
nopredictors |
logical;
if |
lrt0.arg , score0.arg , wald0.arg
|
Logical.
If |
values0 , subset , omit1s
|
These arguments are used if any of the
|
top.half.only |
logical; if |
prefix |
Not used. |
... |
Not used. |
Originally, summaryvglm() was written to be very similar to summary.glm; however, there are now quite a few more options available. By default, show.summary.vglm() tries to be smart about formatting the coefficients, standard errors, etc., and additionally gives 'significance stars' if signif.stars is TRUE.
The coefficients component of the result gives the estimated coefficients and their estimated standard errors, together with their ratio. This third column is labelled z value regardless of whether the dispersion is estimated or known (or fixed by the family). A fourth column gives the two-tailed p-value corresponding to the z ratio based on a Normal reference distribution. In general the normal distribution, rather than the t distribution, is used.
Correlations are printed to two decimal places (or symbolically): to see the actual correlations print summary(object)@correlation directly.
The Hauck-Donner effect (HDE) is tested for almost all models; see hdeff.vglm for details. Arguments hde.NA and threshold.hde here are meant to give some control of the output if this aberration of the Wald statistic occurs (so that the p-value is biased upwards). If the HDE is present then using lrt.stat.vlm to get a more accurate p-value is a good alternative, as p-values based on the likelihood ratio test (LRT) tend to be more accurate than Wald tests and do not suffer from the HDE. Alternatively, if the HDE is present then using wald0.arg = TRUE will compute Wald statistics that are HDE-free; see wald.stat.

The arguments lrt0.arg and score0.arg enable the so-called Wald table to be replaced by the equivalent LRT and Rao score test tables; see lrt.stat.vlm and score.stat. Further IRLS iterations are performed for both of these, hence the computational cost might be significant.
It is possible for programmers to write a methods function to print out extra quantities when summary(vglmObject) is called. The generic function is summaryvglmS4VGAM(), and one can use the S4 function setMethod to compute the quantities needed. Also needed is the generic function showsummaryvglmS4VGAM() to actually print the quantities out.
summaryvglm returns an object of class "summary.vglm"; see summary.vglm-class.
Currently the SE column is deleted when lrt0 = TRUE because SEs are not so meaningful with the LRT. In the future an SE column may be inserted (with NA values) so that the output has four columns like the other tests. In the meantime, the columns of this matrix should be accessed by name and not number.
T. W. Yee.
vglm, confintvglm, vcovvlm, summary.rrvglm, summary.glm, summary.lm, summary, hdeff.vglm, lrt.stat.vlm, score.stat, wald.stat.
## For examples see example(glm)
pneumo <- transform(pneumo, let = log(exposure.time))
(afit <- vglm(cbind(normal, mild, severe) ~ let, acat, data = pneumo))
coef(afit, matrix = TRUE)
summary(afit)  # Might suffer from the Hauck-Donner effect
coef(summary(afit))
summary(afit, lrt0 = TRUE, score0 = TRUE, wald0 = TRUE)
Fits a system of seemingly unrelated regressions.
SURff(mle.normal = FALSE,
      divisor = c("n", "n-max(pj,pk)", "sqrt((n-pj)*(n-pk))"),
      parallel = FALSE, Varcov = NULL, matrix.arg = FALSE)
mle.normal |
Logical.
If |
divisor |
Character, partial matching allowed and the first choice is the default.
The divisor for the estimate of the covariances.
If |
parallel |
See
|
Varcov |
Numeric.
This may be assigned a variance-covariance of the errors.
If |
matrix.arg |
Logical. Of single length. |
Proposed by Zellner (1962), the basic seemingly unrelated regressions (SUR) model is a set of M LMs tied together at the error term level. Each LM's model matrix may potentially have its own set of predictor variables.

Zellner's efficient (ZEF) estimator (also known as Zellner's two-stage Aitken estimator) can be obtained by setting maxit = 1 (and possibly divisor = "sqrt" or divisor = "n-max"). The default value of maxit (in vglm.control) probably means the iterative GLS (IGLS) estimator is computed, because IRLS will probably iterate to convergence. IGLS means that, at each iteration, the residuals are used to estimate the error variance-covariance matrix, which is then used in the GLS. The IGLS estimator is also known as Zellner's iterative Aitken estimator, or IZEF.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
The default convergence criterion may be a little loose. Try setting epsilon = 1e-11, especially with mle.normal = TRUE.
The fitted object has slot @extra$ncols.X.lm, which is a vector with the number of parameters for each LM. Also, @misc$values.divisor is the M-vector of divisor values.
Constraint matrices are needed in order to specify which response variables each term on the RHS of the formula is a regressor for. See the constraints argument of vglm for more information.
T. W. Yee.
Zellner, A. (1962). An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias. J. Amer. Statist. Assoc., 57(298), 348–368.
Kmenta, J. and Gilbert, R. F. (1968). Small Sample Properties of Alternative Estimators of Seemingly Unrelated Regressions. J. Amer. Statist. Assoc., 63(324), 1180–1200.
# Obtain some of the results of p.1199 of Kmenta and Gilbert (1968)
clist <- list("(Intercept)" = diag(2),
              "capital.g"   = rbind(1, 0),
              "value.g"     = rbind(1, 0),
              "capital.w"   = rbind(0, 1),
              "value.w"     = rbind(0, 1))
zef1 <- vglm(cbind(invest.g, invest.w) ~
             capital.g + value.g + capital.w + value.w,
             SURff(divisor = "sqrt"), maxit = 1,
             data = gew, trace = TRUE, constraints = clist)
round(coef(zef1, matrix = TRUE), digits = 4)  # ZEF
zef1@extra$ncols.X.lm
zef1@misc$divisor
zef1@misc$values.divisor
round(sqrt(diag(vcov(zef1))), digits = 4)  # SEs
nobs(zef1, type = "lm")
df.residual(zef1, type = "lm")

mle1 <- vglm(cbind(invest.g, invest.w) ~
             capital.g + value.g + capital.w + value.w,
             SURff(mle.normal = TRUE), epsilon = 1e-11,
             data = gew, trace = TRUE, constraints = clist)
round(coef(mle1, matrix = TRUE), digits = 4)  # MLE
round(sqrt(diag(vcov(mle1))), digits = 4)  # SEs
Create a survival object, usually used as a response variable in a model formula.
SurvS4(time, time2, event, type =, origin = 0)
is.SurvS4(x)
time |
for right censored data, this is the follow up time. For interval data, the first argument is the starting time for the interval. |
x |
any R object. |
event |
The status indicator, normally 0=alive, 1=dead. Other choices are
|
time2 |
ending time of the interval for interval censored or counting
process data only. Intervals are assumed to be open on the left and
closed on the right, |
type |
character string specifying the type of censoring. Possible values
are |
origin |
for counting process data, the hazard function origin. This is most often used in conjunction with a model containing time dependent strata in order to align the subjects properly when they cross over from one strata to another. |
Typical usages are:
SurvS4(time, event)
SurvS4(time, time2, event, type =, origin = 0)
In theory it is possible to represent interval censored data without a third column containing the explicit status. Exact, right censored, left censored and interval censored observations would be represented as intervals of (a,a), (a, infinity), (-infinity, b), and (a,b) respectively; each specifies the interval within which the event is known to have occurred.
If type = "interval2"
then the representation given
above is assumed, with NA taking the place of infinity.
If 'type="interval" event
must be given.
If event
is 0
, 1
, or 2
,
the relevant information is assumed to be contained in
time
, the value in time2
is ignored, and the
second column of the result will contain a placeholder.
Presently, the only methods allowing interval
censored data are the parametric models computed by
survreg
, so the distinction between
open and closed intervals is unimportant. The distinction
is important for counting process data and the Cox model.
The function tries to distinguish between the use of 0/1
and 1/2 coding for left and right censored data using
if (max(status)==2)
. If 1/2 coding is used and all
the subjects are censored, it will guess wrong. Use 0/1
coding in this case.
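Here is a minimal sketch of right censoring with 0/1 status coding (the times below are hypothetical):

times  <- c(5, 8, 12, 3)
status <- c(1, 0, 1, 1)  # 1 = event observed, 0 = right censored
SurvS4(times, status)    # The censored observation prints as 8+
class(SurvS4(times, status))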
An object of class SurvS4 (formerly Surv). There are methods for print, is.na, and subscripting survival objects. SurvS4 objects are implemented as a matrix of 2 or 3 columns. In the case of is.SurvS4, a logical value: TRUE if x inherits from class "SurvS4", otherwise FALSE.
The purpose of having SurvS4 in VGAM is so that the same input can be fed into vglm as into survival functions such as survreg. The class name has been changed from "Surv" to "SurvS4"; see SurvS4-class.

The format J+ is interpreted in VGAM as (J, Inf). If type = "interval" then these should not be used in VGAM: (L,U-] or (L,U+].
The code and documentation comes from survival. Slight modifications have been made for conversion to S4 by T. W. Yee. Also, for "interval" data, as.character.SurvS4() has been modified to print intervals of the form (start, end] and not [start, end] as previously. (This makes a difference for discrete data, such as for cens.poisson.)
All VGAM family functions beginning with "cen" require the packaging function Surv to format the input.
SurvS4-class
,
cens.poisson
,
survreg
,
leukemia
.
with(leukemia, SurvS4(time, status))
class(with(leukemia, SurvS4(time, status)))
S4 version of the Surv class.
A virtual Class: No objects may be created from it.
Class "Surv"
, directly.
Class "matrix"
, directly.
Class "oldClass"
, by class "Surv", distance 2.
Class "structure"
, by class "matrix", distance 2.
Class "array"
, by class "matrix", distance 2.
Class "vector"
, by class "matrix", distance 3, with explicit coerce.
Class "vector"
, by class "matrix", distance 4, with explicit coerce.
signature(object = "SurvS4")
: ...
This code has not been thoroughly tested.
The purpose of having SurvS4 in VGAM is so that the same input can be fed into vglm as into survival functions such as survreg.
T. W. Yee.
See survival.
showClass("SurvS4")
showClass("SurvS4")
Calculates the Takeuchi information criterion for a fitted model object for which a log-likelihood value has been obtained.
TIC(object, ...)
TICvlm(object, ...)
object |
A VGAM object having
class |
... |
Other possible arguments fed into
|
The following formula is used for VGLMs:
TIC = -2 * log-likelihood + 2 * trace(V K),
where V is the inverse of the EIM from the fitted model and K is the outer product of the score vectors. Both V and K are order-p.VLM matrices. One has V equal to vcov(object), and K is computed by taking the outer product of the output from the deriv slot multiplied by the large VLM matrix and then summing over observations. Hence for the huge majority of models, the penalty is computed at the MLE and is empirical in nature. Theoretically, if the fitted model is the true model then AIC equals TIC.

When there are prior weights the score vectors are divided by the square root of these weights.
This code relies on the log-likelihood being defined, and computed, for the object. When comparing fitted objects, the smaller the TIC, the better the fit. The log-likelihood and hence the TIC is only defined up to an additive constant.
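As a hedged illustration of such a comparison (reusing the pneumo data that appears elsewhere in this manual):

pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let,
            cumulative(parallel = TRUE, reverse = TRUE), data = pneumo)
c(AIC = AIC(fit), TIC = TIC(fit))  # Expect similar values if the
                                   # model is close to the truth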
Currently any estimated scale parameter (in GLM parlance) is ignored by treating its value as unity. Also, currently this function is written only for vglm objects and not vgam or rrvglm, etc., objects.
Returns a numeric TIC value.
This code has not been double-checked. The general applicability of TIC for the VGLM/VGAM classes has not been developed fully. In particular, TIC should not be run on some VGAM family functions because of violation of certain regularity conditions, etc. Some authors note that quite large sample sizes are needed for this IC to work reasonably well. TIC has not been defined for RR-VGLMs, QRR-VGLMs, etc., yet.
See AICvlm about models such as posbernoulli.tb that require posbinomial(omit.constant = TRUE).
T. W. Yee.
Takeuchi, K. (1976). Distribution of informational statistics and a criterion of model fitting. (In Japanese). Suri-Kagaku (Mathematic Sciences), 153, 12–18.
Burnham, K. P. and Anderson, D. R. (2002). Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach, 2nd ed. New York, USA: Springer.
VGLMs are described in vglm-class. AIC, AICvlm, BICvlm.
pneumo <- transform(pneumo, let = log(exposure.time))
(fit1 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = TRUE, reverse = TRUE),
              data = pneumo))
coef(fit1, matrix = TRUE)
TIC(fit1)
(fit2 <- vglm(cbind(normal, mild, severe) ~ let,
              cumulative(parallel = FALSE, reverse = TRUE),
              data = pneumo))
coef(fit2, matrix = TRUE)
TIC(fit2)
Fits a Tobit regression model.
tobit(Lower = 0, Upper = Inf, lmu = "identitylink", lsd = "loglink",
      imu = NULL, isd = NULL,
      type.fitted = c("uncensored", "censored", "mean.obs"),
      byrow.arg = FALSE, imethod = 1, zero = "sd")
Lower |
Numeric. It is the value |
Upper |
Numeric. It is the value |
lmu , lsd
|
Parameter link functions for the mean and
standard deviation parameters.
See |
imu , isd , byrow.arg
|
See |
type.fitted |
Type of fitted value returned.
The first choice is default and is the ordinary uncensored or
unbounded linear model.
If |
imethod |
Initialization method. Either 1 or 2 or 3, this specifies
some methods for obtaining initial values for the parameters.
See |
zero |
A vector, e.g., containing the value 1 or 2. If so,
the mean or standard deviation respectively are modelled
as an intercept-only.
Setting |
The Tobit model can be written
y_i* = x_i^T beta + e_i,
where the e_i ~ N(0, sigma^2) independently and i = 1, ..., n. However, we measure y_i = y_i* only if y_i* > Lower and y_i* < Upper for some cutpoints Lower and Upper; otherwise we let y_i = Lower or y_i = Upper, whichever is closer. The Tobit model is thus a multiple linear regression but with censored responses if they lie below or above certain cutpoints.

The defaults for Lower and Upper and lmu correspond to the standard Tobit model.
Fisher scoring is used for the standard and nonstandard models. By default, the mean mu is the first linear/additive predictor, and the log of the standard deviation is the second linear/additive predictor. The Fisher information matrix for uncensored data is diagonal. The fitted values are the estimates of mu.
An object of class "vglmff" (see vglmff-class). The object is used by modelling functions such as vglm and vgam.
If values of the response and Lower and/or Upper are not integers then there is the danger that the value is wrongly interpreted as uncensored. For example, if the first 10 values of the response were runif(10) and Lower was assigned these values, then testing y[1:10] == Lower[1:10] is numerically fraught. Currently, if any y < Lower or y > Upper then a warning is issued. The function round2 may be useful.
The response can be a matrix. If so, then Lower and Upper are recycled into a matrix with the number of columns equal to the number of responses, and the recycling is done row-wise if byrow.arg = TRUE. The default order is as matrix, which is byrow.arg = FALSE. For example, these are returned in fit4@misc$Lower and fit4@misc$Upper below.
If there is no censoring then uninormal is recommended instead. Any value of the response less than Lower or greater than Upper will be assigned the value Lower or Upper respectively, and a warning will be issued. The fitted object has components censoredL and censoredU in the extra slot which specify whether observations are censored in that direction. The function cens.normal is an alternative to tobit().
When obtaining initial values, if the algorithm would otherwise want to fit an underdetermined system of equations, then it uses the entire data set instead. This might result in rather poor quality initial values, and consequently, monitoring convergence is advised.
Thomas W. Yee
Tobin, J. (1958). Estimation of relationships for limited dependent variables. Econometrica 26, 24–36.
rtobit, cens.normal, uninormal, double.cens.normal, posnormal, CommonVGAMffArguments, round2, mills.ratio, margeff, rnorm.
# Here, fit1 is a standard Tobit model and fit2 is nonstandard
tdata <- data.frame(x2 = seq(-1, 1, length = (nn <- 100)))
set.seed(1)
Lower <- 1; Upper <- 4  # For the nonstandard Tobit model
tdata <- transform(tdata, Lower.vec = rnorm(nn, Lower, 0.5),
                          Upper.vec = rnorm(nn, Upper, 0.5))
meanfun1 <- function(x) 0 + 2*x
meanfun2 <- function(x) 2 + 2*x
meanfun3 <- function(x) 3 + 2*x
tdata <- transform(tdata,
  y1 = rtobit(nn, mean = meanfun1(x2)),  # Standard Tobit model
  y2 = rtobit(nn, mean = meanfun2(x2), Lower = Lower, Upper = Upper),
  y3 = rtobit(nn, mean = meanfun3(x2), Lower = Lower.vec,
              Upper = Upper.vec),
  y4 = rtobit(nn, mean = meanfun3(x2), Lower = Lower.vec,
              Upper = Upper.vec))
with(tdata, table(y1 == 0))  # How many censored values?
with(tdata, table(y2 == Lower | y2 == Upper))  # Ditto
with(tdata, table(attr(y2, "cenL")))
with(tdata, table(attr(y2, "cenU")))

fit1 <- vglm(y1 ~ x2, tobit, data = tdata, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)

fit2 <- vglm(y2 ~ x2,
             tobit(Lower = Lower, Upper = Upper, type.f = "cens"),
             data = tdata, trace = TRUE)
table(fit2@extra$censoredL)
table(fit2@extra$censoredU)
coef(fit2, matrix = TRUE)

fit3 <- vglm(y3 ~ x2, tobit(Lower = with(tdata, Lower.vec),
                            Upper = with(tdata, Upper.vec),
                            type.f = "cens"),
             data = tdata, trace = TRUE)
table(fit3@extra$censoredL)
table(fit3@extra$censoredU)
coef(fit3, matrix = TRUE)

# fit4 is fit3 but with type.fitted = "uncen".
fit4 <- vglm(cbind(y3, y4) ~ x2,
             tobit(Lower = rep(with(tdata, Lower.vec), each = 2),
                   Upper = rep(with(tdata, Upper.vec), each = 2),
                   byrow.arg = TRUE),
             data = tdata, crit = "coeff", trace = TRUE)
head(fit4@extra$censoredL)  # A matrix
head(fit4@extra$censoredU)  # A matrix
head(fit4@misc$Lower)       # A matrix
head(fit4@misc$Upper)       # A matrix
coef(fit4, matrix = TRUE)

## Not run: 
# Plot fit1--fit4
par(mfrow = c(2, 2))

plot(y1 ~ x2, tdata, las = 1, main = "Standard Tobit model",
     col = as.numeric(attr(y1, "cenL")) + 3,
     pch = as.numeric(attr(y1, "cenL")) + 1)
legend(x = "topleft", leg = c("censored", "uncensored"),
       pch = c(2, 1), col = c("blue", "green"))
legend(-1.0, 2.5, c("Truth", "Estimate", "Naive"), lwd = 2,
       col = c("purple", "orange", "black"), lty = c(1, 2, 2))
lines(meanfun1(x2) ~ x2, tdata, col = "purple", lwd = 2)
lines(fitted(fit1) ~ x2, tdata, col = "orange", lwd = 2, lty = 2)
lines(fitted(lm(y1 ~ x2, tdata)) ~ x2, tdata, col = "black",
      lty = 2, lwd = 2)  # This is simplest but wrong!

plot(y2 ~ x2, data = tdata, las = 1, main = "Tobit model",
     col = as.numeric(attr(y2, "cenL")) + 3 +
           as.numeric(attr(y2, "cenU")),
     pch = as.numeric(attr(y2, "cenL")) + 1 +
           as.numeric(attr(y2, "cenU")))
legend(x = "topleft", leg = c("censored", "uncensored"),
       pch = c(2, 1), col = c("blue", "green"))
legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), lwd = 2,
       col = c("purple", "orange", "black"), lty = c(1, 2, 2))
lines(meanfun2(x2) ~ x2, tdata, col = "purple", lwd = 2)
lines(fitted(fit2) ~ x2, tdata, col = "orange", lwd = 2, lty = 2)
lines(fitted(lm(y2 ~ x2, tdata)) ~ x2, tdata, col = "black",
      lty = 2, lwd = 2)  # This is simplest but wrong!

plot(y3 ~ x2, data = tdata, las = 1,
     main = "Tobit model with nonconstant censor levels",
     col = as.numeric(attr(y3, "cenL")) + 2 +
           as.numeric(attr(y3, "cenU") * 2),
     pch = as.numeric(attr(y3, "cenL")) + 1 +
           as.numeric(attr(y3, "cenU") * 2))
legend(x = "topleft", pch = c(2, 3, 1), col = c(3, 4, 2),
       leg = c("censoredL", "censoredU", "uncensored"))
legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), lwd = 2,
       col = c("purple", "orange", "black"), lty = c(1, 2, 2))
lines(meanfun3(x2) ~ x2, tdata, col = "purple", lwd = 2)
lines(fitted(fit3) ~ x2, tdata, col = "orange", lwd = 2, lty = 2)
lines(fitted(lm(y3 ~ x2, tdata)) ~ x2, tdata, col = "black",
      lty = 2, lwd = 2)  # This is simplest but wrong!

plot(y3 ~ x2, data = tdata, las = 1,
     main = "Tobit model with nonconstant censor levels",
     col = as.numeric(attr(y3, "cenL")) + 2 +
           as.numeric(attr(y3, "cenU") * 2),
     pch = as.numeric(attr(y3, "cenL")) + 1 +
           as.numeric(attr(y3, "cenU") * 2))
legend(x = "topleft", pch = c(2, 3, 1), col = c(3, 4, 2),
       leg = c("censoredL", "censoredU", "uncensored"))
legend(-1.0, 3.5, c("Truth", "Estimate", "Naive"), lwd = 2,
       col = c("purple", "orange", "black"), lty = c(1, 2, 2))
lines(meanfun3(x2) ~ x2, data = tdata, col = "purple", lwd = 2)
lines(fitted(fit4)[, 1] ~ x2, tdata, col = "orange", lwd = 2, lty = 2)
lines(fitted(lm(y3 ~ x2, tdata)) ~ x2, data = tdata, col = "black",
      lty = 2, lwd = 2)  # This is simplest but wrong!
## End(Not run)
Density, distribution function, quantile function and random generation for the Tobit model.
dtobit(x, mean = 0, sd = 1, Lower = 0, Upper = Inf, log = FALSE)
ptobit(q, mean = 0, sd = 1, Lower = 0, Upper = Inf,
       lower.tail = TRUE, log.p = FALSE)
qtobit(p, mean = 0, sd = 1, Lower = 0, Upper = Inf,
       lower.tail = TRUE, log.p = FALSE)
rtobit(n, mean = 0, sd = 1, Lower = 0, Upper = Inf)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If |
Lower , Upper
|
vector of lower and upper thresholds. |
mean , sd , lower.tail , log , log.p
|
see |
See tobit
, the VGAM family function
for estimating the parameters,
for details.
Note that the value returned at Lower
and Upper
is not a density at all but the probability mass (area)
to the left and to the right of those points, respectively.
Thus the distribution has two spikes of finite mass there;
see the example below.
Consequently, dtobit(Lower) + dtobit(Upper) +
the area
in between equals unity.
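A minimal numeric sketch of this point (the mean value 0.5 and the limits 0 and 4 are assumed purely for illustration): the quantity returned at a truncation point equals the latent normal's tail mass there.

dtobit(0, mean = 0.5, Lower = 0, Upper = 4)  # Spike at Lower = 0
pnorm(0, mean = 0.5)                         # Same: P(latent <= 0)
dtobit(4, mean = 0.5, Lower = 0, Upper = 4)  # Spike at Upper = 4
pnorm(4, mean = 0.5, lower.tail = FALSE)     # Same: P(latent >= 4)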
dtobit
gives the density,
ptobit
gives the distribution function,
qtobit
gives the quantile function, and
rtobit
generates random deviates.
T. W. Yee
mu <- 0.5; x <- seq(-2, 4, by = 0.01)
Lower <- -1; Upper <- 2.0

integrate(dtobit, lower = Lower, upper = Upper,
          mean = mu, Lower = Lower, Upper = Upper)$value +
  dtobit(Lower, mean = mu, Lower = Lower, Upper = Upper) +
  dtobit(Upper, mean = mu, Lower = Lower, Upper = Upper)  # Adds to 1

## Not run: 
plot(x, ptobit(x, m = mu, Lower = Lower, Upper = Upper),
     type = "l", ylim = 0:1, las = 1, col = "orange",
     ylab = paste("ptobit(m = ", mu, ", sd = 1, Lower =", Lower,
                  ", Upper =", Upper, ")"),
     main = "Orange is the CDF; blue is density",
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0)
lines(x, dtobit(x, m = mu, L = Lower, U = Upper), col = "blue")

probs <- seq(0.1, 0.9, by = 0.1)
Q <- qtobit(probs, m = mu, Lower = Lower, Upper = Upper)
lines(Q, ptobit(Q, m = mu, Lower = Lower, Upper = Upper),
      col = "purple", lty = "dashed", type = "h")
lines(Q, dtobit(Q, m = mu, Lower = Lower, Upper = Upper),
      col = "darkgreen", lty = "dashed", type = "h")
abline(h = probs, col = "purple", lty = "dashed")
max(abs(ptobit(Q, mu, L = Lower, U = Upper) - probs))  # Should be 0

epts <- c(Lower, Upper)  # Endpoints have a spike (not quite, actually)
lines(epts, dtobit(epts, m = mu, Lower = Lower, Upper = Upper),
      col = "blue", lwd = 3, type = "h")
## End(Not run)
Generic function for the tolerances of a model.
Tol(object, ...)
object |
An object for which the computation or extraction of a tolerance or tolerances is meaningful. |
... |
Other arguments fed into the specific
methods function of the model. Sometimes they are fed
into the methods function for |
Different models can define a tolerance in different ways. Many models have no such notion or definition.
Tolerances occur in quadratic ordination, i.e., CQO and UQO. They have ecological meaning because a high tolerance for a species means the species can survive over a large environmental range (euryoecous species), whereas a small tolerance means the species' niche is small (stenoecous species). Mathematically, the tolerance is like the variance of a normal distribution.
The value returned depends specifically on the methods
function invoked.
For a cqo
binomial or Poisson fit, this
function returns an R x R x S array,
where R is the rank
and S is the number of species.
Each tolerance matrix ought to be positive-definite, and
for a rank-1 fit, taking the square root of each tolerance
matrix results in each species' tolerance (like a standard
deviation).
There is a direct inverse relationship between the scaling of
the latent variables (site scores) and the tolerances.
One normalization is for the latent variables to have unit
variance.
Another normalization is for all the tolerances to be unity.
These two normalizations cannot hold simultaneously in general.
For rank-R models with R > 1 it becomes more complicated
because the latent variables are also uncorrelated. An important
argument when fitting quadratic ordination models is whether
eq.tolerances
is TRUE
or FALSE
.
See Yee (2004) for details.
Tolerances are undefined for ‘linear’ and additive ordination models. They are well-defined for quadratic ordination models.
Thomas W. Yee
Yee, T. W. (2004). A new technique for maximum-likelihood canonical Gaussian ordination. Ecological Monographs, 74, 685–701.
Yee, T. W. (2006). Constrained additive ordination. Ecology, 87, 203–213.
Tol.qrrvglm
.
Max
,
Opt
,
cqo
,
rcim
for UQO.
## Not run: 
set.seed(111)  # This leads to the global solution
hspider[, 1:6] <- scale(hspider[, 1:6])  # Standardized environmental vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig +
          CoveMoss + CoveHerb + ReflLux,
          poissonff, data = hspider, Crow1positive = FALSE)
Tol(p1)
## End(Not run)
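As a hypothetical follow-up to the example above (assuming p1 converged to a rank-1 solution), each species' tolerance, interpreted like a standard deviation, is the square root of its 1 x 1 tolerance matrix:

sqrt(Tol(p1)[1, 1, ])  # One tolerance per species for a rank-1 fit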
Estimating the parameter of the Topp-Leone distribution by maximum likelihood estimation.
topple(lshape = "logitlink", zero = NULL, gshape = ppoints(8),
       parallel = FALSE, percentiles = 50,
       type.fitted = c("mean", "percentiles", "Qlink"))
lshape , gshape
|
Details at |
zero , parallel
|
Details at |
type.fitted , percentiles
|
See |
The Topp-Leone (Topple) distribution has a probability density
function that can be written
f(y; s) = 2 s y^(s-1) (1 - y) (2 - y)^(s-1)
for 0 < y < 1 and shape parameter 0 < s < 1.
The mean of Y
is 1 - 4^s [Gamma(1 + s)]^2 / Gamma(2 + 2s)
(returned as the fitted values).
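The mean formula can be checked numerically; this is a small sketch with an assumed shape value of 0.4.

s <- 0.4  # Assumed shape value
integrate(function(y) y * dtopple(y, shape = s), 0, 1)$value
1 - 4^s * gamma(1 + s)^2 / gamma(2 + 2*s)  # Should agree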
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
Fisher-scoring and Newton-Raphson are the same here. A related distribution is the triangle distribution. This VGAM family function handles multiple responses.
T. W. Yee
Topp, C. W. and F. C. Leone (1955). A family of J-shaped frequency functions. Journal of the American Statistical Association, 50, 209–219.
tdata <- data.frame(y = rtopple(1000, logitlink(1, inverse = TRUE)))
tfit <- vglm(y ~ 1, topple, tdata, trace = TRUE, crit = "coef")
coef(tfit, matrix = TRUE)
Coef(tfit)
Density, distribution function, quantile function and random generation for the Topp-Leone distribution.
dtopple(x, shape, log = FALSE)
ptopple(q, shape, lower.tail = TRUE, log.p = FALSE)
qtopple(p, shape)
rtopple(n, shape)
x , q , p , n
|
Same as |
shape |
the (shape) parameter, which lies in (0, 1). |
log |
Logical.
If |
lower.tail , log.p
|
See topple
, the VGAM
family function for
estimating the (shape) parameter by
maximum likelihood
estimation, for the formula of the
probability density function.
dtopple
gives the density,
ptopple
gives the distribution function,
qtopple
gives the quantile function, and
rtopple
generates random deviates.
The Topp-Leone distribution is related to the triangle distribution.
T. W. Yee
Topp, C. W. and F. C. Leone (1955). A family of J-shaped frequency functions. Journal of the American Statistical Association, 50, 209–219.
## Not run: 
shape <- 0.7; x <- seq(0.02, 0.999, length = 300)
plot(x, dtopple(x, shape = shape), type = "l", col = "blue",
     main = "Blue is density, orange is CDF", ylab = "", las = 1,
     sub = "Purple lines are the 10,20,...,90 percentiles")
abline(h = 0, col = "blue", lty = 2)
lines(x, ptopple(x, shape = shape), type = "l", col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qtopple(probs, shape = shape)
lines(Q, dtopple(Q, shape), col = "purple", lty = 3, type = "h")
lines(Q, ptopple(Q, shape), col = "purple", lty = 3, type = "h")
abline(h = probs, col = "purple", lty = 3)
max(abs(ptopple(Q, shape) - probs))  # Should be zero
## End(Not run)
Toxoplasmosis data in 34 cities in El Salvador.
data(toxop)
A data frame with 34 observations on the following 4 variables.
rainfall
a numeric vector; the amount of rainfall in each city.
ssize
a numeric vector; sample size.
cityNo
a numeric vector; the city number.
positive
a numeric vector; the number of subjects testing positive for the disease.
See the references for details.
See the references for details.
Efron, B. (1978). Regression and ANOVA With zero-one data: measures of residual variation. Journal of the American Statistical Association, 73, 113–121.
Efron, B. (1986). Double exponential families and their use in generalized linear regression. Journal of the American Statistical Association, 81, 709–721.
## Not run: 
with(toxop, plot(rainfall, positive/ssize, col = "blue"))
plot(toxop, col = "blue")
## End(Not run)
Density, distribution function, quantile function and random
generation for the Triangle distribution with parameter
theta
.
dtriangle(x, theta, lower = 0, upper = 1, log = FALSE)
ptriangle(q, theta, lower = 0, upper = 1,
          lower.tail = TRUE, log.p = FALSE)
qtriangle(p, theta, lower = 0, upper = 1,
          lower.tail = TRUE, log.p = FALSE)
rtriangle(n, theta, lower = 0, upper = 1)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
Same as |
theta |
the theta parameter, which lies between lower and upper. |
lower , upper
|
lower and upper limits of the distribution. Must be finite. |
log |
Logical.
If |
lower.tail , log.p
|
See triangle
,
the VGAM family function
for estimating the parameter by
maximum likelihood estimation; however, the usual regularity
conditions do not hold, so the algorithm crawls
to the solution if lucky.
dtriangle
gives the density,
ptriangle
gives the distribution function,
qtriangle
gives the quantile function, and
rtriangle
generates random deviates.
T. W. Yee and Kai Huang
## Not run: 
x <- seq(-0.1, 1.1, by = 0.01); theta <- 0.75
plot(x, dtriangle(x, theta = theta), type = "l", col = "blue",
     las = 1, main = "Blue is density, orange is the CDF",
     sub = "Purple lines are the 10,20,...,90 percentiles",
     ylim = c(0, 2), ylab = "")
abline(h = 0, col = "blue", lty = 2)
lines(x, ptriangle(x, theta = theta), col = "orange")
probs <- seq(0.1, 0.9, by = 0.1)
Q <- qtriangle(probs, theta = theta)
lines(Q, dtriangle(Q, theta = theta), col = "purple", lty = 3, type = "h")
ptriangle(Q, theta = theta) - probs  # Should be all zero
abline(h = probs, col = "purple", lty = 3)
## End(Not run)
Deletes statistically nonsignificant regression coefficients via their constraint matrices, for future refitting.
trim.constraints(object, sig.level = 0.05, max.num = Inf,
                 intercepts = TRUE, ...)
object |
Some VGAM object, especially having
class |
sig.level |
Significance levels, with values in |
max.num |
Numeric, positive and integer-valued.
Maximum number of regression coefficients allowable for deletion.
This allows one to limit the number of deleted coefficients.
For example,
if |
intercepts |
Logical. Trim the intercept term?
If |
... |
Unused but for provision in the future. |
This utility function is intended to simplify an existing
vglm
object having
variables (terms) that affect unnecessary parameters.
Suppose the explanatory variables in the formula
includes a simple numeric covariate called x2
.
This variable will affect every linear predictor if
zero = NULL
in the VGAM family function.
This situation may correspond to the constraint matrices having
unnecessary columns because their regression coefficients are
statistically nonsignificant.
This function attempts to delete those columns and
return a possibly simplified list of constraint matrices
that can make refitting a simpler model easy to do.
P-values obtained from summaryvglm
(with HDEtest = FALSE
for increased speed)
are compared to sig.level
to test for
statistical significance.
For terms that generate more than one column of the
"lm"
model matrix,
such as bs
and poly
,
the column is deleted if all regression coefficients
are statistically nonsignificant.
Incidentally, users should instead use
sm.bs
,
sm.ns
,
sm.poly
,
etc.,
for smart and safe prediction.
One can think of this function as facilitating
backward elimination for variable selection,
especially if max.num = 1
and M = 1;
however, usually more than one regression coefficient is deleted
here by default.
A list of possibly simpler constraint matrices
that can be fed back into the model using the
constraints
argument
(usually zero = NULL
is needed to avoid a warning).
Consequently, they are required to be of the "term"
-type.
After the model is refitted, applying
summaryvglm
should result in
regression coefficients that are ‘all’ statistically
significant.
This function has not been tested thoroughly.
One extreme is that a term is totally deleted because
none of its regression coefficients are needed,
and that situation has not yet been finalized.
Ideally, object
only contains terms where at least
one regression coefficient has a p-value less than
sig.level
.
For ordered factors and other situations, deleting
certain columns may not make sense and may destroy interpretability.
As stated above, max.num
may not work properly
when there are terms that
generate more than one column of the LM model matrix.
However, this limitation may change in the future.
This function is experimental and may be replaced by some other function in the future. This function does not use S4 object oriented programming but may be converted to such in the future.
T. W. Yee
constraints
,
vglm
,
summaryvglm
,
model.matrixvlm
,
drop1.vglm
,
step4vglm
,
sm.bs
,
sm.ns
,
sm.poly
.
## Not run: 
data("xs.nz", package = "VGAMdata")
fit1 <- vglm(cbind(worry, worrier) ~ bs(age) + sex + ethnicity + cat + dog,
             binom2.or(zero = NULL), data = xs.nz, trace = TRUE)
summary(fit1, HDEtest = FALSE)  # 'cat' is not significant at all
dim(constraints(fit1, matrix = TRUE))
(tclist1 <- trim.constraints(fit1))  # No 'cat'
fit2 <-  # Delete 'cat' manually from the formula:
  vglm(cbind(worry, worrier) ~ bs(age) + sex + ethnicity + dog,
       binom2.or(zero = NULL), data = xs.nz,
       constraints = tclist1, trace = TRUE)
summary(fit2, HDEtest = FALSE)  # A simplified model
dim(constraints(fit2, matrix = TRUE))  # Fewer regression coefficients
## End(Not run)
Density and random generation for the trivariate normal distribution.
dtrinorm(x1, x2, x3, mean1 = 0, mean2 = 0, mean3 = 0,
         var1 = 1, var2 = 1, var3 = 1,
         cov12 = 0, cov23 = 0, cov13 = 0, log = FALSE)
rtrinorm(n, mean1 = 0, mean2 = 0, mean3 = 0,
         var1 = 1, var2 = 1, var3 = 1,
         cov12 = 0, cov23 = 0, cov13 = 0)
x1 , x2 , x3
|
vector of quantiles. |
mean1 , mean2 , mean3
|
vectors of means. |
var1 , var2 , var3
|
vectors of variances. |
cov12 , cov23 , cov13
|
vectors of covariances. |
n |
number of observations.
Same as |
log |
Logical.
If |
The default arguments correspond to the standard trivariate normal
distribution with correlation parameters equal to 0,
which corresponds to three independent standard normal distributions.
Let sd1
(say) be sqrt(var1)
, written sigma1, etc.
Then the general formula for each correlation coefficient is
of the form
rho12 = cov12 / (sigma1 * sigma2),
and similarly for the two others.
Thus if the
var
arguments are left alone then
the cov
arguments can be inputted with the rho values.
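A small sketch of this point (the rho values are assumed for illustration): with the default unit variances, correlations can be passed directly to the cov arguments.

set.seed(1)
ymat <- rtrinorm(1000, cov12 = 0.5, cov23 = 0.25, cov13 = -0.4)
round(cor(ymat), 2)  # Should be near the inputted rho values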
dtrinorm
gives the density,
rtrinorm
generates random deviates (an n by 3 matrix).
dtrinorm()
's arguments might change in the future!
It's safest to use the full argument names
to future-proof possible changes!
For rtrinorm()
,
if the i-th variance-covariance matrix is not
positive-definite then the i-th row is all
NAs.
pnorm
,
trinormal
,
uninormal
,
binormal
,
rbinorm
.
## Not run: 
nn <- 1000
tdata <- data.frame(x2 = sort(runif(nn)))
tdata <- transform(tdata,
                   mean1 = 1 + 2 * x2, mean2 = 3 + 1 * x2, mean3 = 4,
                   var1 = exp(1), var2 = exp(1), var3 = exp(1),
                   rho12 = rhobitlink( 1, inverse = TRUE),
                   rho23 = rhobitlink( 1, inverse = TRUE),
                   rho13 = rhobitlink(-1, inverse = TRUE))
ymat <- with(tdata, rtrinorm(nn, mean1, mean2, mean3,
                             var1, var2, var3,
                             sqrt(var1)*sqrt(var2)*rho12,
                             sqrt(var2)*sqrt(var3)*rho23,
                             sqrt(var1)*sqrt(var3)*rho13))
pairs(ymat, col = "blue")
## End(Not run)
Maximum likelihood estimation of the nine parameters of a trivariate normal distribution.
trinormal(zero = c("sd", "rho"), eq.mean = FALSE, eq.sd = FALSE,
          eq.cor = FALSE,
          lmean1 = "identitylink", lmean2 = "identitylink",
          lmean3 = "identitylink",
          lsd1 = "loglink", lsd2 = "loglink", lsd3 = "loglink",
          lrho12 = "rhobitlink", lrho23 = "rhobitlink",
          lrho13 = "rhobitlink",
          imean1 = NULL, imean2 = NULL, imean3 = NULL,
          isd1 = NULL, isd2 = NULL, isd3 = NULL,
          irho12 = NULL, irho23 = NULL, irho13 = NULL, imethod = 1)
lmean1 , lmean2 , lmean3 , lsd1 , lsd2 , lsd3
|
Link functions applied to the means and standard deviations.
See |
lrho12 , lrho23 , lrho13
|
Link functions applied to the correlation parameters.
See |
imean1 , imean2 , imean3 , isd1 , isd2 , isd3
|
See |
irho12 , irho23 , irho13 , imethod , zero
|
See |
eq.mean , eq.sd , eq.cor
|
Logical. Constrain the means or the standard deviations or correlation parameters to be equal? |
For the trivariate normal distribution,
this fits a linear model (LM) to the means, and
by default,
the other parameters are intercept-only.
The response should be a three-column matrix.
The three correlation parameters are prefixed by rho
,
and the default links give them values between -1 and 1;
however, this may be problematic when the correlation parameters
are constrained to be equal, etc.
The fitted means are returned as the fitted values, which is in
the form of a three-column matrix.
Fisher scoring is implemented.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The default parameterization does not make the estimated
variance-covariance matrix positive-definite.
In order for the variance-covariance matrix to be positive-definite
the quantity
1 - rho12^2 - rho13^2 - rho23^2 + 2 * rho12 * rho13 * rho23
must be positive, and if eq.cor = TRUE
then
this means that the rhos must be between -0.5 and 1.
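A minimal sketch of that positive-definiteness condition (the rho values are assumed for illustration):

pd.quantity <- function(rho12, rho13, rho23)
  1 - rho12^2 - rho13^2 - rho23^2 + 2 * rho12 * rho13 * rho23
pd.quantity( 0.6,  0.6,  0.6)  # Positive, so valid
pd.quantity(-0.6, -0.6, -0.6)  # Negative: equal rhos must exceed -0.5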
T. W. Yee
uninormal
,
binormal
,
rtrinorm
.
## Not run: 
set.seed(123); nn <- 1000
tdata <- data.frame(x2 = runif(nn), x3 = runif(nn))
tdata <- transform(tdata,
                   y1 = rnorm(nn, 1 + 2 * x2),
                   y2 = rnorm(nn, 3 + 4 * x2),
                   y3 = rnorm(nn, 4 + 5 * x2))
fit1 <- vglm(cbind(y1, y2, y3) ~ x2, data = tdata,
             trinormal(eq.sd = TRUE, eq.cor = TRUE), trace = TRUE)
coef(fit1, matrix = TRUE)
constraints(fit1)
summary(fit1)
# Try this when eq.sd = TRUE, eq.cor = TRUE:
fit2 <- vglm(cbind(y1, y2, y3) ~ x2, data = tdata, stepsize = 0.25,
             trinormal(eq.sd = TRUE, eq.cor = TRUE,
                       lrho12 = extlogitlink(min = -0.5),
                       lrho23 = extlogitlink(min = -0.5),
                       lrho13 = extlogitlink(min = -0.5)),
             trace = TRUE)
coef(fit2, matrix = TRUE)
## End(Not run)
Generic function for a trajectory plot.
trplot(object, ...)
object |
An object for which a trajectory plot is meaningful. |
... |
Other arguments fed into the specific
methods function of the model. They usually are graphical
parameters, and sometimes they are fed
into the methods function for |
Trajectory plots can be defined in different ways for different models. Many models have no such notion or definition.
For quadratic and additive ordination models they plot the fitted values of two species against each other (more than two is theoretically possible, but not implemented in this software yet).
The value returned depends specifically on the methods function invoked.
Thomas W. Yee
Yee, T. W. (2020). On constrained and unconstrained quadratic ordination. Manuscript in preparation.
trplot.qrrvglm
,
perspqrrvglm
,
lvplot
.
## Not run: 
set.seed(123)
hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdze environ. vars
p1cqo <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                   Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                   Trocterr, Zoraspin) ~
             WaterCon + BareSand + FallTwig +
             CoveMoss + CoveHerb + ReflLux,
             poissonff, data = hspider, Crow1positive = FALSE)
nos <- ncol(depvar(p1cqo))
clr <- 1:nos  # OR (1:(nos+1))[-7] to omit yellow
trplot(p1cqo, which.species = 1:3, log = "xy", lwd = 2,
       col = c("blue", "orange", "green"), label = TRUE) -> ii
legend(0.00005, 0.3,
       paste(ii$species[, 1], ii$species[, 2], sep = " and "),
       lwd = 2, lty = 1, col = c("blue", "orange", "green"))
abline(a = 0, b = 1, lty = "dashed", col = "grey")
## End(Not run)
Produces a trajectory plot for
quadratic reduced-rank vector generalized linear models
(QRR-VGLMs).
It is only applicable for rank-1 models with argument
noRRR = ~ 1
.
trplot.qrrvglm(object, which.species = NULL, add = FALSE,
               show.plot = TRUE, label.sites = FALSE,
               sitenames = rownames(object@y), axes.equal = TRUE,
               cex = par()$cex, col = 1:(nos * (nos - 1)/2), log = "",
               lty = rep_len(par()$lty, nos * (nos - 1)/2),
               lwd = rep_len(par()$lwd, nos * (nos - 1)/2),
               tcol = rep_len(par()$col, nos * (nos - 1)/2),
               xlab = NULL, ylab = NULL, main = "", type = "b",
               check.ok = TRUE, ...)
object |
Object of class |
which.species |
Integer or character vector specifying the species to be plotted. If integer, these are the columns of the response matrix. If character, these must match exactly with the species' names. The default is to use all species. |
add |
Logical. Add to an existing plot?
If |
show.plot |
Logical. Plot it? |
label.sites |
Logical. If |
sitenames |
Character vector. The names of the sites. |
axes.equal |
Logical. If |
cex |
Character expansion of the labelling
of the site names.
Used only if |
col |
Color of the lines.
See the |
log |
Character, specifying which (if any) of the x- and
y-axes are to be on a logarithmic scale.
See the |
lty |
Line type.
See the |
lwd |
Line width.
See the |
tcol |
Color of the text for the site names.
See the |
xlab |
Character caption for the x-axis.
By default, a suitable caption is found.
See the |
ylab |
Character caption for the y-axis.
By default, a suitable caption is found.
See the |
main |
Character, giving the title of the plot.
See the |
type |
Character, giving the type of plot. A common
option is to use |
check.ok |
Logical. Whether a check is performed to see
that |
... |
Arguments passed into the |
A trajectory plot plots the fitted values of a ‘second’ species
against a ‘first’ species. The argument which.species
must
therefore contain at least two species. By default, all of the
species that were fitted in object
are plotted.
With more than a few species
the resulting plot will be very congested, and so it
is recommended
that only a few species be selected for plotting.
In the above, S is the number of species selected
for plotting,
so there will be
choose(S, 2) = S*(S-1)/2 curves/trajectories
in total; see the sketch below.
A trajectory plot will be fitted only
if noRRR = ~ 1
because
otherwise the trajectory will not be a smooth function
of the latent
variables.
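For instance, a small sketch of the trajectory count (the value of S is assumed to be 4):

S <- 4           # Number of species selected for plotting (assumed)
choose(S, 2)     # 6 trajectories
S * (S - 1) / 2  # The same; cf. the col/lty/lwd defaults above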
A list with the following components.
species.names |
A matrix of characters giving the ‘first’ and ‘second’ species. The number of different combinations of species is given by the number of rows. This is useful for creating a legend. |
sitenames |
A character vector of site names, sorted by the latent variable (from low to high). |
Plotting the axes on a log scale is often a good idea.
The use of xlim
and ylim
to control
the axis limits
is also a good idea, so as to limit the extent
of the curves at low
abundances or probabilities.
Setting label.sites = TRUE
is a good idea only if the number of
sites is small, otherwise there is too much clutter.
Thomas W. Yee
Yee, T. W. (2020). On constrained and unconstrained quadratic ordination. Manuscript in preparation.
## Not run: 
set.seed(111)  # Leads to the global solution
# hspider[, 1:6] <- scale(hspider[, 1:6])  # Stdze the environ vars
p1 <- cqo(cbind(Alopacce, Alopcune, Alopfabr, Arctlute, Arctperi,
                Auloalbi, Pardlugu, Pardmont, Pardnigr, Pardpull,
                Trocterr, Zoraspin) ~
          WaterCon + BareSand + FallTwig +
          CoveMoss + CoveHerb + ReflLux,
          poissonff, data = hspider, trace = FALSE)
trplot(p1, which.species = 1:3, log = "xy", type = "b", lty = 1,
       main = "Trajectory plot of three hunting spiders species",
       col = c("blue", "red", "green"), lwd = 2, label = TRUE) -> ii
legend(0.00005, 0.3, lwd = 2, lty = 1,
       col = c("blue", "red", "green"),
       with(ii, paste(species.names[, 1], species.names[, 2],
                      sep = " and ")))
abline(a = 0, b = 1, lty = "dashed", col = "grey")  # Ref. line
## End(Not run)
Given the minimum and maximum values in a response variable, and a positive multiplier, returns the truncated values for generally-truncated regression.
Trunc(Range, mux = 2, location = 0, omits = TRUE)
Range |
Numeric, of length 2 containing the minimum and maximum
(in that order) of the untransformed data.
Alternatively, if |
mux |
Numeric, the multiplier. A positive integer. |
location |
Numeric, the location parameter, allows a shift to the right. |
omits |
Logical.
The default is to return the truncated values (those being
omitted).
If |
Generally-truncated regression can handle underdispersion with respect to some parent or base distribution such as the Poisson. Yee and Ma (2023) call this the GT-Expansion (GTE) method, which is a special case of the GT-location-scale (GT-LS) method. This is a utility function to help make life easier. It is assumed that the response is a count variable.
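As a small sketch of the GTE bookkeeping (the range 2:5 and multiplier 3 are assumed for illustration): multiplying counts in 2:5 by 3 makes only the multiples of 3 attainable, so the values in between should be the ones truncated away.

Trunc(c(2, 5), mux = 3)  # Presumably 7 8 10 11 13 14: unattainable values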
A vector of values to be fed into the truncate
argument
of a VGAM family function such as gaitdpoisson
.
If mux = 1
then the function will return a NULL
rather than integer(0)
.
T. W. Yee
gaitdpoisson
,
gaitdlog
,
gaitdzeta
,
range
,
setdiff
,
goffset
.
Trunc(c(1, 8), 2)

## Not run: 
set.seed(1)  # The following example is based on the normal
mymean <- 20; m.truth <- 3  # approximation to the Poisson.
gdata <- data.frame(y1 = round(rnorm((nn <- 1000), mymean,
                                     sd = sqrt(mymean / m.truth))))
org1 <- with(gdata, range(y1))  # Original range of the raw data
m.max <- 5  # Try multipliers 1:m.max
logliks <- numeric(m.max)
names(logliks) <- as.character(1:m.max)
for (i in 1:m.max) {
  logliks[i] <- logLik(vglm(i * y1 ~ offset(rep(log(i), nn)),
                            gaitdpoisson(truncate = Trunc(org1, i)),
                            data = gdata))
}
sort(logliks, decreasing = TRUE)  # Best to worst
par(mfrow = c(1, 2))
plot(with(gdata, table(y1)))  # Underdispersed wrt Poisson
plot(logliks, col = "blue", type = "b", xlab = "Multiplier")
## End(Not run)
Density, distribution function, quantile function and random
generation for the upper truncated Pareto(I) distribution with
parameters lower
, upper
and shape
.
dtruncpareto(x, lower, upper, shape, log = FALSE)
ptruncpareto(q, lower, upper, shape,
             lower.tail = TRUE, log.p = FALSE)
qtruncpareto(p, lower, upper, shape)
rtruncpareto(n, lower, upper, shape)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n , log
|
Same meaning as |
lower , upper , shape
|
the lower, upper and shape ( |
lower.tail , log.p
|
See truncpareto
, the VGAM family function
for estimating the parameter by maximum likelihood estimation,
for the formula of the probability density function and the
range restrictions imposed on the parameters.
dtruncpareto
gives the density,
ptruncpareto
gives the distribution function,
qtruncpareto
gives the quantile function, and
rtruncpareto
generates random deviates.
T. W. Yee and Kai Huang
Aban, I. B., Meerschaert, M. M. and Panorska, A. K. (2006). Parameter estimation for the truncated Pareto distribution, Journal of the American Statistical Association, 101(473), 270–277.
lower <- 3; upper <- 8; kay <- exp(0.5)
## Not run: 
xx <- seq(lower - 0.5, upper + 0.5, len = 401)
plot(xx, dtruncpareto(xx, low = lower, upp = upper, shape = kay),
     main = "Truncated Pareto density split into 10 equal areas",
     type = "l", ylim = 0:1, xlab = "x")
abline(h = 0, col = "blue", lty = 2)
qq <- qtruncpareto(seq(0.1, 0.9, by = 0.1), low = lower, upp = upper,
                   shape = kay)
lines(qq, dtruncpareto(qq, low = lower, upp = upper, shape = kay),
      col = "purple", lty = 3, type = "h")
lines(xx, ptruncpareto(xx, low = lower, upp = upper, shape = kay),
      col = "orange")
## End(Not run)
pp <- seq(0.1, 0.9, by = 0.1)
qq <- qtruncpareto(pp, lower = lower, upper = upper, shape = kay)
ptruncpareto(qq, lower = lower, upper = upper, shape = kay)
qtruncpareto(ptruncpareto(qq, lower = lower, upper = upper,
                          shape = kay),
             lower = lower, upper = upper, shape = kay) - qq  # Should be all 0
Maximum likelihood estimation of the 2-parameter Weibull distribution with lower truncation. No observations should be censored.
truncweibull(lower.limit = 1e-5, lAlpha = "loglink", lBetaa = "loglink",
             iAlpha = NULL, iBetaa = NULL, nrfs = 1,
             probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "Betaa")
lower.limit |
Positive lower truncation limits.
Recycled to the same dimension as the response, going
across rows first.
The default, being close to 0, should mean
effectively the same
results as |
lAlpha , lBetaa
|
Parameter link functions applied to the
(positive) parameters |
iAlpha , iBetaa
|
|
imethod , nrfs , zero , probs.y
|
Details at |
MLE of the two parameters of the Weibull distribution are
computed, subject to lower truncation.
That is, all response values are greater
than lower.limit
,
element-wise.
For a particular observation this is any known positive value.
This function is currently based directly on
Wingo (1989) and his parameterization is used (it differs
from weibullR
).
In particular,
Betaa = a and Alpha = b^(-Betaa)
(equivalently, b = Alpha^(-1/Betaa)),
where a
and b
are the shape and scale parameters as in
weibullR
and
dweibull
.
Upon fitting the extra
slot has a component called
lower.limit
which is of the same dimension as the
response.
The fitted values are the mean, which are computed
using pgamma.deriv
and pgamma.deriv.unscaled
.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This function may be converted to the same parameterization as
weibullR
at any time.
Yet to do: one element of the EIM may be wrong (due to
two interpretations of a formula; but it seems to work).
Convergence is slower than usual and this may imply something
is wrong; use argument maxit
.
In fact, it's probably
because pgamma.deriv.unscaled
is
inaccurate at q = 1
and q = 2
.
Also,
convergence should be monitored, especially if the truncation
means that a large proportion of the data is lost
compared to an
ordinary Weibull distribution.
More improvements need to be made, e.g., initial values are currently based on no truncation. This VGAM family function handles multiple responses.
T. W. Yee
Wingo, D. R. (1989). The left-truncated Weibull distribution: theory and computation. Statistical Papers, 30(1), 39–48.
weibullR
,
dweibull
,
pgamma.deriv
,
pgamma.deriv.unscaled
.
## Not run: 
nn <- 5000; prop.lost <- 0.40  # Proportion lost to truncation
wdata <- data.frame(x2 = runif(nn))  # Complete Weibull data
wdata <- transform(wdata,
                   Betaa = exp(1))  # > 2 okay (satisfies regularity conds)
wdata <- transform(wdata, Alpha = exp(0.5 - 1 * x2))
wdata <- transform(wdata, Shape = Betaa,
                   # aaa = Betaa,
                   # bbb = 1 / Alpha^(1 / Betaa),
                   Scale = 1 / Alpha^(1 / Betaa))
wdata <- transform(wdata, y2 = rweibull(nn, Shape, scale = Scale))
summary(wdata)

# Proportion lost:
lower.limit2 <- with(wdata, quantile(y2, prob = prop.lost))
# Smaller due to truncation:
wdata <- subset(wdata, y2 > lower.limit2)

fit1 <- vglm(y2 ~ x2, maxit = 100, trace = TRUE,
             truncweibull(lower.limit = lower.limit2), wdata)
coef(fit1, matrix = TRUE)
summary(fit1)
vcov(fit1)
head(fit1@extra$lower.limit)
## End(Not run)
University California Berkeley Graduate Admissions: counts cross-classified by acceptance/rejection and gender, for the six largest departments.
data(ucberk)
A data frame with 6 departmental groups with the following 5 columns.
Counts of men denied admission.
Counts of men admitted.
Counts of women denied admission.
Counts of women admitted.
Department (the six largest),
called A
, B
, ..., F
.
From Bickel et al. (1975), the data consists of applications for admission to graduate study at the University of California, Berkeley, for the fall 1973 quarter. In the admissions cycle for that quarter, the Graduate Division at Berkeley received approximately 15,000 applications, some of which were later withdrawn or transferred to a different proposed entry quarter by the applicants. Of the applications finally remaining for the fall 1973 cycle, 12,763 were sufficiently complete to permit a decision. There were about 101 graduate department and interdepartmental graduate majors. There were 8442 male applicants and 4321 female applicants. About 44 percent of the males and about 35 percent of the females were admitted. The data are well-known for illustrating Simpson's paradox.
Bickel, P. J., Hammel, E. A. and O'Connell, J. W. (1975). Sex bias in graduate admissions: data from Berkeley. Science, 187(4175): 398–404.
Freedman, D., Pisani, R. and Purves, R. (1998). Chapter 2 of Statistics, 3rd. ed., W. W. Norton & Company.
summary(ucberk)
Maximum likelihood estimation of the two parameters of a univariate normal distribution.
uninormal(lmean = "identitylink", lsd = "loglink", lvar = "loglink",
          var.arg = FALSE, imethod = 1, isd = NULL, parallel = FALSE,
          vfl = FALSE, Form2 = NULL, smallno = 1e-05,
          zero = if (var.arg) "var" else "sd")
lmean , lsd , lvar
|
Link functions applied to the mean and standard
deviation/variance. See |
var.arg |
Logical.
If |
smallno |
Numeric, positive but close to 0.
Used specifically for quasi-variances; if the link for the
mean is |
imethod , parallel , isd , zero
|
See |
vfl , Form2
|
See |
This fits a linear model (LM) as the first linear/additive predictor. So, by default, this is just the mean. By default, the log of the standard deviation is the second linear/additive predictor. The Fisher information matrix is diagonal. This VGAM family function can handle multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
gaussianff()
was deprecated but has been brought back
into VGAM nominally.
It should be called Mickey Mouse.
It gives a warning and calls
uninormal
instead
(hopefully all the arguments should pass in correctly).
Users should avoid calling gaussianff()
;
use glm
with
gaussian
instead.
It is dangerous to treat what is an
uninormal
fit as a
gaussianff()
object.
Yet to do: allow an argument such as eq.sd
that enables
the standard deviations to be the same.
Also, this function used to be called normal1()
too,
but it has been decommissioned.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
posnormal
,
mix2normal
,
ordsup
,
N1binomial
,
N1poisson
,
Qvar
,
tobit
,
cens.normal
,
foldnormal
,
skewnormal
,
double.cens.normal
,
SURff
,
AR1
,
huber2
,
studentt
,
binormal
,
trinormal
,
dnorm
,
simulate.vlm
,
hdeff.vglm
.
udata <- data.frame(x2 = rnorm(nn <- 200))
udata <- transform(udata,
  y1  = rnorm(nn, m = 1 - 3*x2, sd = exp(1 + 0.2*x2)),
  y2a = rnorm(nn, m = 1 + 2*x2, sd = exp(1 + 2.0*x2)^0.5),
  y2b = rnorm(nn, m = 1 + 2*x2, sd = exp(1 + 2.0*x2)^0.5))
fit1 <- vglm(y1 ~ x2, uninormal(zero = NULL), udata, trace = TRUE)
coef(fit1, matrix = TRUE)
fit2 <- vglm(cbind(y2a, y2b) ~ x2, data = udata, trace = TRUE,
             uninormal(var = TRUE, parallel = TRUE ~ x2, zero = NULL))
coef(fit2, matrix = TRUE)

# Generate data from N(mu=theta=10, sigma=theta) and estimate theta.
theta <- 10
udata <- data.frame(y3 = rnorm(100, m = theta, sd = theta))
fit3a <- vglm(y3 ~ 1, uninormal(lsd = "identitylink"), data = udata,
              constraints = list("(Intercept)" = rbind(1, 1)))
fit3b <- vglm(y3 ~ 1, uninormal(lsd = "identitylink",
                                parallel = TRUE ~ 1, zero = NULL), udata)
coef(fit3a, matrix = TRUE)
coef(fit3b, matrix = TRUE)  # Same as fit3a
A set of common utility functions used by VGAM family functions.
param.names(string, S = 1, skip1 = FALSE, sep = "")
dimm(M, hbw = M)
interleave.VGAM(.M, M1, inverse = FALSE)
string |
Character. Name of the parameter. |
M , .M
|
Numeric. The total number of linear/additive predictors, called
|
M1 |
Numeric. The number of linear/additive predictors for one response, called
|
inverse |
Logical. Useful for the inverse function of |
S |
Numeric. The number of responses. |
skip1 , sep
|
The former is logical;
should one skip (or omit) |
hbw |
Numeric. The half-bandwidth, which measures the number of bands emanating from the central diagonal band. |
See Yee (2015) for some details about some of these functions.
For param.names()
, this function returns the parameter names
for S responses,
i.e.,
string
is returned unchanged if S = 1,
else
paste(string, 1:S, sep = "").
For dimm()
, this function returns the number of elements
to be stored for each of the working weight matrices.
They are represented as columns in the matrix wz
in
e.g., vglm.fit()
.
See the matrix-band format described in
Section 18.3.5 of Yee (2015).
For interleave.VGAM()
, this function returns a reordering
of the linear/additive predictors depending on the number of responses.
The arguments presented in Table 18.5 may not be valid
in your version of Yee (2015).
T. W. Yee.
Victor Miranda added the inverse
argument
to interleave.VGAM()
.
Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer.
CommonVGAMffArguments
,
VGAM-package
.
param.names("shape", 1)  # "shape"
param.names("shape", 3)  # c("shape1", "shape2", "shape3")

dimm(3, hbw = 1)  # Diagonal matrix; the 3 elements need storage.
dimm(3)  # A general 3 x 3 symmetric matrix has 6 unique elements.
dimm(3, hbw = 2)  # Tridiagonal matrix; the 3-3 element is 0 and unneeded.

M1 <- 2; ncoly <- 3; M <- ncoly * M1
mynames1 <- param.names("location", ncoly)
mynames2 <- param.names("scale", ncoly)
(parameters.names <- c(mynames1, mynames2)[interleave.VGAM(M, M1 = M1)])
# The following is/was in Yee (2015) and has a poor/deceptive style:
(parameters.names <- c(mynames1, mynames2)[interleave.VGAM(M, M = M1)])
parameters.names[interleave.VGAM(M, M1 = M1, inverse = TRUE)]
A small count data set. During WWII V1 flying-bombs were fired from sites in France (Pas-de-Calais) and Dutch coasts towards London. The number of hits per square grid around London were recorded.
data(V1)
A data frame with the following variables.
Values between 0 and 4, plus the value 7. The 7 is really imputed from the paper (it was recorded as "5 and over").
Observed frequency, i.e., the number of grids with that many hits.
The data concerns 576 square grids each of 0.25 square kms about south London. The area was selected comprising 144 square kms over which the basic probability function of the distribution was very nearly constant. V1s, which were one type of flying-bomb, were a “Vergeltungswaffen” or vengeance weapon fired during the summer of 1944 at London. The V1s were informally called Buzz Bombs or Doodlebugs, and they were pulse-jet-powered with a warhead of 850 kg of explosives. Over 9500 were launched at London, and many were shot down by artillery and the RAF. Over the period considered the total number of bombs within the area was 537.
It was asserted that the bombs tended to be grouped in clusters. However, a basic Poisson analysis shows this is not the case. Their guidance system being rather primitive, the data is consistent with a Poisson distribution (random).
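A small sketch of that classical check (the variables hits and ofreq are as used in the example below): compare the observed grid frequencies with Poisson-expected ones.

lam <- with(V1, weighted.mean(hits, ofreq))  # Mean hits per grid
cbind(observed = V1$ofreq,
      expected = round(sum(V1$ofreq) * dpois(V1$hits, lam), 1))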
Compared to Clarke (1946), the more modern analysis of Shaw and Shaw (2019) shows a higher density of hits in south London; hence the distribution is not really uniform over the entire region.
Clarke, R. D. (1946). An application of the Poisson distribution. Journal of the Institute of Actuaries, 72(3), 481.
Shaw, L. P. and Shaw, L. F. (2019). The flying bomb and the actuary. Significance, 16(5): 12–17.
V1
mean(with(V1, rep(hits, times = ofreq)))
var(with(V1, rep(hits, times = ofreq)))
sum(with(V1, rep(hits, times = ofreq)))
## Not run: 
barplot(with(V1, ofreq),
        names.arg = as.character(with(V1, hits)),
        main = "London V1 buzz bomb hits",
        col = "lightblue", las = 1,
        ylab = "Frequency", xlab = "Hits")
## End(Not run)
A small count data set. During WWII V2 missiles were fired from the continent mainly towards London. The number of hits per square grid around London were recorded.
data(V2)
A data frame with the following variables.
Values between 0 and 3.
Observed frequency, i.e., the number of grids with that many hits.
The data concerns 408 square grids each of 0.25 square kms
about south London (south of the River Thames).
They were picked in a rectangular region of 102 square kilometres
where the density of hits was roughly uniform.
The data is somewhat comparable to V1
albeit
is a smaller data set.
Shaw, L. P. and Shaw, L. F. (2019). The flying bomb and the actuary. Significance, 16(5): 12–17.
V2
mean(with(V2, rep(hits, times = ofreq)))
var(with(V2, rep(hits, times = ofreq)))
sum(with(V2, rep(hits, times = ofreq)))
## Not run: 
barplot(with(V2, ofreq),
        names.arg = as.character(with(V2, hits)),
        main = "London V2 rocket hits",
        col = "lightgreen", las = 1,
        ylab = "Frequency", xlab = "Hits")
## End(Not run)
Returns the variance-covariance matrix of the
parameters of
a fitted vlm-class
or
rrvglm-class
or
drrvglm-class
object.
vcovvlm(object, dispersion = NULL, untransform = FALSE,
        complete = TRUE, ...)
vcovrrvglm(object, ...)
vcovdrrvglm(object, ...)
vcovqrrvglm(object, ...)
object |
A fitted model object,
having class |
dispersion |
Numerical. This argument should not be used as VGAM will be phasing out dispersion parameters. Formerly, a value may be specified, else it is estimated for quasi-GLMs (e.g., method of moments). For almost all other types of VGLMs it is usually unity. The value is multiplied by the raw variance-covariance matrix. |
untransform |
logical.
For intercept-only models with trivial
constraints;
if set |
complete |
An argument currently ignored.
Added only so that
|
... |
Same as |
This methods function is based on the QR decomposition
of the (large) VLM model matrix and working weight matrices.
Currently
vcovvlm
operates on the fundamental
vlm-class
objects because pretty well
all modelling functions in VGAM inherit from this.
Currently
vcovrrvglm
is not entirely reliable because the elements of the
A–C part of the matrix sometimes cannot be
computed very accurately, so that the entire matrix is
not positive-definite.
For "qrrvglm"
objects,
vcovqrrvglm
is currently working with Rank = 1
objects or
when I.tolerances = TRUE
.
Then the answer is conditional given C.
The code is based on
model.matrixqrrvglm
so that the dimnames
are the same.
Same as vcov
.
For some models inflated standard errors can occur, such as
parameter estimates near the boundary of the parameter space.
Detection for this is available for some models using
hdeff.vglm
, which tests for an
Hauck-Donner effect (HDE) for each regression coefficient.
If the HDE is present, using
lrt.stat.vlm
should return more accurate p-values.
Thomas W. Yee
confintvglm
,
summaryvglm
,
vcov
,
hdeff.vglm
,
lrt.stat.vlm
,
model.matrixqrrvglm
.
## Not run: 
ndata <- data.frame(x2 = runif(nn <- 300))
ndata <- transform(ndata,
                   y1 = rnbinom(nn, mu = exp(3+x2), exp(1)),
                   y2 = rnbinom(nn, mu = exp(2-x2), exp(0)))
fit1 <- vglm(cbind(y1, y2) ~ x2, negbinomial, ndata, trace = TRUE)
fit2 <- rrvglm(y1 ~ x2, negbinomial(zero = NULL), data = ndata)
coef(fit1, matrix = TRUE)
vcov(fit1)
vcov(fit2)
## End(Not run)
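As a hypothetical follow-up to the example above, the Wald standard errors reported by summary() are the square roots of the diagonal:

sqrt(diag(vcov(fit1)))  # SEs matching those in summary(fit1)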
Some sea levels data sets recorded at Venice, Italy.
data(venice)
data(venice90)
venice
is a data frame with 51 observations
on the following 11
variables.
It concerns the maximum heights of sea levels between
1931 and 1981.
year: a numeric vector.
r1, ..., r10: numeric vectors;
r1
is the highest recorded value,
r2
is the second highest recorded value, etc.
venice90
is a data frame with 455 observations
on the following
7 variables.
year, month, day, hour: numeric vectors; the actual time of the recording.
sealevel: numeric; sea level.
ohour: numeric; the number of hours since the midnight separating 31 Dec 1939 and 1 Jan 1940.
Year: numeric vector;
approximate year as a real number.
The formula is start.year + ohour / (365.26 * 24),
where start.year
is 1940.
One can treat Year
as continuous whereas
year
can be treated as both continuous and discrete.
Sea levels are in cm.
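As a small check of the Year construction described above (a sketch; assumes VGAM is attached so that venice90 can be loaded):

data(venice90, package = "VGAM")
# Year should equal 1940 + ohour / (365.26 * 24), up to small rounding:
with(venice90, max(abs(Year - (1940 + ohour / (365.26 * 24)))))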
For venice90
, the value 0 corresponds to a fixed
reference point (e.g., the mean sea level
in 1897 at an old
palace of Venice). Since the relative (perceived)
mean sea level has trended upwards over time (an overall
increase of more than 0.4 m by 2010),
the value 0 is
now a very low and unusual measurement.
For venice
, in 1935 only the top six values
were recorded.
For venice90
, this is a subset of
a data set provided by
Paolo Pirazzoli consisting of hourly sea
levels from 1940 to 2009.
Values greater than 90 cm were extracted,
and then declustered
(each cluster provides no more than one value, and
each value is at least 24 hours apart).
Thus the values are more likely to be independent.
Of the original (2009-1940+1)*365.26*24
hourly values,
about 0.07 percent comprise venice90
.
Yet to do: check for consistency between the data sets. Some external data sets elsewhere have some extremes recorded at times not exactly on the hour.
Pirazzoli, P. (1982) Maree estreme a Venezia (periodo 1872–1981). Acqua Aria, 10, 1023–1039.
Thanks to Paolo Pirazzoli and Alberto Tomasin
for the venice90
data.
Smith, R. L. (1986). Extreme value theory based on the r largest annual events. Journal of Hydrology, 86, 27–43.
Battistin, D. and Canestrelli, P. (2006). La serie storica delle maree a Venezia, 1872–2004 (in Italian), Comune di Venezia. Istituzione Centro Previsione e Segnalazioni Maree.
## Not run: 
matplot(venice[["year"]], venice[, -1], xlab = "Year",
        ylab = "Sea level (cm)", type = "l")
ymat <- as.matrix(venice[, paste("r", 1:10, sep = "")])
fit1 <- vgam(ymat ~ s(year, df = 3), gumbel(R = 365, mpv = TRUE),
             venice, trace = TRUE, na.action = na.pass)
head(fitted(fit1))
par(mfrow = c(2, 1), xpd = TRUE)
plot(fit1, se = TRUE, lcol = "blue", llwd = 2, slty = "dashed")
par(mfrow = c(1, 1), bty = "l", xpd = TRUE, las = 1)
qtplot(fit1, mpv = TRUE, lcol = c(1, 2, 5), tcol = c(1, 2, 5),
       llwd = 2, pcol = "blue", tadj = 0.1)

plot(sealevel ~ Year, data = venice90, type = "h", col = "blue")
summary(venice90)
dim(venice90)
round(100 * nrow(venice90) / ((2009 - 1940 + 1) * 365.26 * 24), dig = 3)
## End(Not run)
Fit a vector generalized additive model (VGAM). Both 1st-generation VGAMs (based on backfitting) and 2nd-generation VGAMs (based on P-splines, with automatic smoothing parameter selection) are implemented. This is a large class of models that includes generalized additive models (GAMs) and vector generalized linear models (VGLMs) as special cases.
vgam(formula,
     family = stop("argument 'family' needs to be assigned"),
     data = list(), weights = NULL, subset = NULL, na.action,
     etastart = NULL, mustart = NULL, coefstart = NULL,
     control = vgam.control(...), offset = NULL,
     method = "vgam.fit", model = FALSE, x.arg = TRUE,
     y.arg = TRUE, contrasts = NULL, constraints = NULL,
     extra = list(), form2 = NULL, qr.arg = FALSE,
     smart = TRUE, ...)
formula |
a symbolic description of the model to be fit.
The RHS of the formula is applied to each
linear/additive predictor,
and should include at least one
|
family |
Same as for |
data |
an optional data frame containing the variables
in the model.
By default the variables are taken from
|
weights , subset , na.action
|
Same as for |
etastart , mustart , coefstart
|
Same as for |
control |
a list of parameters for controlling the fitting process.
See |
method |
the method to be used in fitting the model.
The default (and presently only) method |
constraints , model , offset
|
Same as for |
x.arg , y.arg
|
logical values indicating whether the
model matrix and response
vector/matrix used in the fitting process
should be assigned in the
|
contrasts , extra , form2 , qr.arg , smart
|
Same as for |
... |
further arguments passed into |
A vector generalized additive model (VGAM)
is loosely defined
as a statistical model that is a function
of $M$ additive predictors.
The central formula is given by
$$\eta_j(x) = \sum_{k=1}^{p} f_{(j)k}(x_k)$$
where $x_k$ is the
$k$th explanatory variable
(almost always $x_1 = 1$
for the intercept term),
and $f_{(j)k}$
are smooth functions of $x_k$
that are estimated by smoothers.
The first term in the summation is just the intercept.
Currently
two types of smoothers are
implemented:
s
represents
the older and more traditional one, called a
vector (cubic smoothing spline) smoother and is
based on Yee and Wild (1996);
it is more similar to the R package gam.
The newer one is represented by
sm.os
and
sm.ps
, and these are
based on O-splines and P-splines—they allow automatic
smoothing parameter selection; it is more similar
to the R package mgcv.
In the above, $j = 1, \ldots, M$ where
$M$ is finite.
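As a hedged sketch contrasting the two flavours (using the hunua data set from VGAM; sm.os could replace sm.ps here):

# G1-VGAM: vector cubic smoothing spline via s(), fitted by backfitting.
fit.g1 <- vgam(agaaus ~ s(altitude, df = 2), binomialff, data = hunua)
# G2-VGAM: P-splines via sm.ps(); smoothing parameters chosen automatically.
fit.g2 <- vgam(agaaus ~ sm.ps(altitude), binomialff, data = hunua)
class(fit.g1)  # "vgam"
class(fit.g2)  # "pvgam"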
If all the functions are constrained to be linear then
the resulting model is a vector generalized linear model
(VGLM). VGLMs are best fitted with
vglm
.
Vector (cubic smoothing spline) smoothers are represented
by s()
(see s
). Local
regression via lo()
is not supported. The
results of vgam
will differ from those of gam()
(in the gam package) because vgam()
uses a different
knot selection algorithm. In general, fewer knots are
chosen because the computation becomes expensive when
the number of additive predictors is large.
Second-generation VGAMs are based on the
O-splines and P-splines.
The latter is due to Eilers and Marx (1996).
Backfitting is not required, and estimation is
performed using IRLS.
The function sm.os
represents a smart
implementation of O-splines.
The function sm.ps
represents a smart
implementation of P-splines.
Written as G2-VGAMs or P-VGAMs, this methodology
should not be used
unless the sample size is reasonably large.
Usually a UBRE predictive criterion is optimized
(at each IRLS iteration)
because the
scale parameter for VGAMs is usually assumed to be known.
This search for optimal smoothing parameters
does not always converge,
and neither is it totally reliable.
G2-VGAMs implicitly set criterion = "coefficients"
so that
convergence occurs when the change in the
regression coefficients
between 2 IRLS iterations is sufficiently small.
Otherwise the search for the optimal
smoothing parameters might
cause the log-likelihood to decrease
between 2 IRLS iterations.
Currently outer iteration is implemented,
by default,
rather than performance iteration because the latter
is more prone to converging to a local solution; see
Wood (2004) for details.
One can use performance iteration
by setting Maxit.outer = 1
in
vgam.control
.
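A small sketch of the switch (assuming the hunua data and a sm.ps() term; the model is illustrative):

# Outer iteration (the default):
fit.outer <- vgam(agaaus ~ sm.ps(altitude), binomialff, data = hunua)
# Performance iteration, via vgam.control()'s Maxit.outer:
fit.perf <- vgam(agaaus ~ sm.ps(altitude), binomialff, data = hunua,
                 Maxit.outer = 1)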
The underlying algorithm of VGAMs is IRLS.
First-generation VGAMs (called G1-VGAMs)
are estimated by modified vector backfitting
using vector splines. O-splines are used as
the basis functions
for the vector (smoothing) splines, which are
a lower dimensional
version of natural B-splines.
The function vgam.fit()
actually does the
work. The smoothing code is based on F. O'Sullivan's
BART code.
A closely related methodology based on VGAMs called
constrained additive ordination (CAO) first forms
a linear combination of the explanatory variables (called
latent variables) and then fits a GAM to these.
This is implemented in the function cao
for a very limited choice of family functions.
For G1-VGAMs and G2-VGAMs, an object of class
"vgam"
or
"pvgam"
respectively
(see vgam-class
and pvgam-class
for further information).
For G1-VGAMs,
currently vgam
can only handle
constraint matrices cmat
,
say, such that crossprod(cmat)
is diagonal.
Violations of this can be detected by is.buggy
.
VGAMs with constraint matrices that have
non-orthogonal columns should
be fitted with
sm.os
or
sm.ps
terms
instead of s
.
See warnings in vglm.control
.
This function can fit a wide variety of
statistical models. Some of
these are harder to fit than others because
of inherent numerical
difficulties associated with some of them.
Successful model fitting
benefits from cumulative experience.
Varying the values of arguments
in the VGAM family function itself
is a good first step if
difficulties arise, especially if initial
values can be inputted.
A second, more general step, is to vary
the values of arguments in
vgam.control
.
A third step is to make use of arguments
such as etastart
,
coefstart
and mustart
.
Some VGAM family functions end in "ff"
to avoid interference with other functions, e.g.,
binomialff
, poissonff
.
This is because VGAM family functions
are incompatible with
glm
(and also
gam()
in gam and
gam
in mgcv).
The smart prediction (smartpred
) library
is packaged with the VGAM library.
The theory behind the scaling parameter is currently being made more rigorous, but it should give the same value as the scale parameter for GLMs.
Thomas W. Yee
Wood, S. N. (2004). Stable and efficient multiple smoothing parameter estimation for generalized additive models. J. Amer. Statist. Assoc., 99(467): 673–686.
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
Yee, T. W. (2008).
The VGAM
Package.
R News, 8, 28–39.
Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer.
Yee, T. W. (2016). Comments on “Smoothing parameter and model selection for general smooth models” by Wood, S. N. and Pya, N. and Safken, N., J. Amer. Statist. Assoc., 110(516).
is.buggy
,
vgam.control
,
vgam-class
,
vglmff-class
,
plotvgam
,
summaryvgam
,
summarypvgam
,
sm.os
,
sm.ps
,
s
,
magic
,
vglm
,
vsmooth.spline
,
cao
.
# Nonparametric proportional odds model
pneumo <- transform(pneumo, let = log(exposure.time))
vgam(cbind(normal, mild, severe) ~ s(let),
     cumulative(parallel = TRUE), data = pneumo, trace = TRUE)

# Nonparametric logistic regression
hfit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, hunua)
## Not run: plot(hfit, se = TRUE)
phfit <- predict(hfit, type = "terms", raw = TRUE, se = TRUE)
names(phfit)
head(phfit$fitted)
head(phfit$se.fit)
phfit$df
phfit$sigma

## Not run: 
# Fit two species simultaneously
hfit2 <- vgam(cbind(agaaus, kniexc) ~ s(altitude, df = c(2, 3)),
              binomialff(multiple.responses = TRUE), data = hunua)
coef(hfit2, matrix = TRUE)  # Not really interpretable
plot(hfit2, se = TRUE, overlay = TRUE, lcol = 3:4, scol = 3:4)
ooo <- with(hunua, order(altitude))
with(hunua, matplot(altitude[ooo], fitted(hfit2)[ooo, ],
     ylim = c(0, 0.8), las = 1, type = "l", lwd = 2,
     xlab = "Altitude (m)", ylab = "Probability of presence",
     main = "Two plant species' response curves"))
with(hunua, rug(altitude))

# The 'subset' argument does not work here. Use subset() instead.
set.seed(1)
zdata <- data.frame(x2 = runif(nn <- 500))
zdata <- transform(zdata, y = rbinom(nn, 1, 0.5))
zdata <- transform(zdata, subS = runif(nn) < 0.7)
sub.zdata <- subset(zdata, subS)  # Use this instead
if (FALSE)
  fit4a <- vgam(cbind(y, y) ~ s(x2, df = 2),
                binomialff(multiple.responses = TRUE),
                data = zdata, subset = subS)  # This fails!!!
fit4b <- vgam(cbind(y, y) ~ s(x2, df = 2),
              binomialff(multiple.responses = TRUE),
              data = sub.zdata)  # This succeeds!!!
fit4c <- vgam(cbind(y, y) ~ sm.os(x2),
              binomialff(multiple.responses = TRUE),
              data = sub.zdata)  # This succeeds!!!
par(mfrow = c(2, 2))
plot(fit4b, se = TRUE, shade = TRUE, shcol = "pink")
plot(fit4c, se = TRUE, shade = TRUE, shcol = "pink")
## End(Not run)
Vector generalized additive models.
Objects can be created by calls of the form vgam(...)
.
nl.chisq
:Object of class "numeric"
.
Nonlinear chi-squared values.
nl.df
:Object of class "numeric"
.
Nonlinear chi-squared degrees of freedom values.
spar
:Object of class "numeric"
containing the (scaled) smoothing parameters.
s.xargument
:Object of
class "character"
holding the variable name of any s()
terms.
var
:Object of class "matrix"
holding
approximate pointwise standard error information.
Bspline
:Object of class "list"
holding the scaled (internal and boundary) knots, and the
fitted B-spline coefficients. These are used
for prediction.
extra
:Object of class "list"
;
the extra
argument on entry to vglm
. This
contains any extra information that might be needed
by the family function.
family
:Object of class "vglmff"
.
The family function.
iter
:Object of class "numeric"
.
The number of IRLS iterations used.
predictors
:Object of class "matrix"
with $M$ columns, which holds
the $M$
linear predictors.
assign
:Object of class "list"
,
from class "vlm"
.
This named list gives information matching
the columns and the
(LM) model matrix terms.
call
:Object of class "call"
,
from class
"vlm"
.
The matched call.
coefficients
:Object of class
"numeric"
, from class "vlm"
.
A named vector of coefficients.
constraints
:Object of
class "list"
, from
class "vlm"
.
A named list of constraint matrices used in the fitting.
contrasts
:Object of
class "list"
, from
class "vlm"
.
The contrasts used (if any).
control
:Object of class "list"
,
from class
"vlm"
.
A list of parameters for controlling the fitting process.
See vglm.control
for details.
criterion
:Object of
class "list"
, from
class "vlm"
.
List of convergence criterion evaluated at the
final IRLS iteration.
df.residual
:Object of class
"numeric"
, from class "vlm"
.
The residual degrees of freedom.
df.total
:Object of class "numeric"
,
from class "vlm"
.
The total degrees of freedom.
dispersion
:Object of class "numeric"
,
from class "vlm"
.
The scaling parameter.
effects
:Object of class "numeric"
,
from class "vlm"
.
The effects.
fitted.values
:Object of class
"matrix"
, from class "vlm"
.
The fitted values. This is usually the mean but may be
quantiles, or the location parameter,
e.g., in the Cauchy model.
misc
:Object of class "list"
,
from class "vlm"
.
A named list to hold miscellaneous parameters.
model
:Object of class "data.frame"
,
from class "vlm"
.
The model frame.
na.action
:Object of class "list"
,
from class "vlm"
.
A list holding information about missing values.
offset
:Object of class "matrix"
,
from class "vlm"
.
If non-zero, an $M$-column matrix of offsets.
post
:Object of class "list"
,
from class "vlm"
where post-analysis results may be put.
preplot
:Object of class "list"
,
from class "vlm"
used by plotvgam
; the plotting parameters
may be put here.
prior.weights
:Object of class
"matrix"
, from class "vlm"
holding the initially supplied weights.
qr
:Object of class "list"
,
from class "vlm"
.
QR decomposition at the final iteration.
R
:Object of class "matrix"
,
from class "vlm"
.
The R matrix in the QR decomposition used
in the fitting.
rank
:Object of class "integer"
,
from class "vlm"
.
Numerical rank of the fitted model.
residuals
:Object of class "matrix"
,
from class "vlm"
.
The working residuals at the final IRLS iteration.
ResSS
:Object of class "numeric"
,
from class "vlm"
.
Residual sum of squares at the final IRLS iteration with
the adjusted dependent vectors and weight matrices.
smart.prediction
:Object of class
"list"
, from class "vlm"
.
A list of data-dependent parameters (if any)
that are used by smart prediction.
terms
:Object of class "list"
,
from class "vlm"
.
The terms
object used.
weights
:Object of class "matrix"
,
from class "vlm"
.
The weight matrices at the final IRLS iteration.
This is in matrix-band form.
x
:Object of class "matrix"
,
from class "vlm"
.
The model matrix (LM, not VGLM).
xlevels
:Object of class "list"
,
from class "vlm"
.
The levels of the factors, if any, used in fitting.
y
:Object of class "matrix"
,
from class "vlm"
.
The response, in matrix form.
Xm2
:Object of class "matrix"
,
from class "vlm"
.
See vglm-class
.
Ym2
:Object of class "matrix"
,
from class "vlm"
.
See vglm-class
.
callXm2
:Object of class "call"
, from class "vlm"
.
The matched call for argument form2
.
Class "vglm"
, directly.
Class "vlm"
, by class "vglm"
.
signature(object = "vglm")
:
cumulative distribution function.
Useful for quantile regression and extreme value data models.
signature(object = "vglm")
:
density plot.
Useful for quantile regression models.
signature(object = "vglm")
:
deviance of the model (where applicable).
signature(x = "vglm")
:
diagnostic plots.
signature(object = "vglm")
:
extract the additive predictors or
predict the additive predictors at a new data frame.
signature(x = "vglm")
:
short summary of the object.
signature(object = "vglm")
:
quantile plot (only applicable to some models).
signature(object = "vglm")
:
residuals. There are various types of these.
signature(object = "vglm")
:
residuals. Shorthand for resid
.
signature(object = "vglm")
:
return level plot.
Useful for extreme value data models.
signature(object = "vglm")
:
a more detailed summary of the object.
VGAMs have all the slots that vglm
objects
have (vglm-class
), plus the first few slots
described in the section above.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
vgam.control
,
vglm
,
s
,
vglm-class
,
vglmff-class
.
# Fit a nonparametric proportional odds model
pneumo <- transform(pneumo, let = log(exposure.time))
vgam(cbind(normal, mild, severe) ~ s(let),
     cumulative(parallel = TRUE), data = pneumo)
Algorithmic constants and parameters for running vgam
are set using this function.
vgam.control(all.knots = FALSE, bf.epsilon = 1e-07, bf.maxit = 30,
             checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE,
             criterion = names(.min.criterion.VGAM), epsilon = 1e-07,
             maxit = 30, Maxit.outer = 10, noWarning = FALSE,
             na.action = na.fail, nk = NULL, save.weights = FALSE,
             se.fit = TRUE, trace = FALSE,
             wzepsilon = .Machine$double.eps^0.75, xij = NULL,
             gamma.arg = 1, ...)
all.knots |
logical indicating if all distinct points of
the smoothing variables are to be used as knots.
By default, |
bf.epsilon |
tolerance used by the modified vector backfitting algorithm for testing convergence. Must be a positive number. |
bf.maxit |
maximum number of iterations allowed in the modified vector backfitting algorithm. Must be a positive integer. |
checkwz |
logical indicating whether the diagonal elements of
the working weight matrices should be checked
whether they are
sufficiently positive, i.e., greater
than |
Check.rank , Check.cm.rank
|
See |
criterion |
character variable describing what criterion is to
be used to test for convergence.
The possibilities are listed
in |
epsilon |
positive convergence tolerance epsilon. Roughly
speaking, the
Newton-Raphson/Fisher-scoring/local-scoring iterations
are assumed to have
converged when two successive |
maxit |
maximum number of Newton-Raphson/Fisher-scoring/local-scoring iterations allowed. |
Maxit.outer |
maximum number of
outer iterations allowed when there are
sm.os or sm.ps terms.
Note that performance iteration can be used by setting Maxit.outer = 1. |
na.action |
how to handle missing values.
Unlike the SPLUS |
nk |
vector of length |
save.weights |
logical indicating whether the |
se.fit |
logical indicating whether approximate
pointwise standard errors are to be saved on the object.
If |
trace |
logical indicating if output should be produced for each iteration. |
wzepsilon |
Small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. |
noWarning |
Same as |
xij |
Same as |
gamma.arg |
Numeric; same as |
... |
other parameters that may be picked up from control functions that are specific to the VGAM family function. |
Most of the control parameters are used within
vgam.fit
and you will have to look at that
to understand the full details. Many of the control
parameters are used in a similar manner by vglm.fit
(vglm
) because the algorithm (IRLS) is
very similar.
Setting save.weights=FALSE
is useful for some
models because the weights
slot of the object is
often the largest and so less memory is used to store the
object. However, for some VGAM family functions,
it is necessary to set save.weights=TRUE
because
the weights
slot cannot be reconstructed later.
A list with components matching the input names. A little
error checking is done, but not much. The list is assigned
to the control
slot of vgam
objects.
See vglm.control
.
vgam
does not implement half-stepsizing,
therefore parametric models should be fitted with
vglm
. Also, vgam
is slower
than vglm
.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
vgam
,
vglm.control
,
vsmooth.spline
,
vglm
.
pneumo <- transform(pneumo, let = log(exposure.time))
vgam(cbind(normal, mild, severe) ~ s(let, df = 2), multinomial,
     data = pneumo, trace = TRUE, eps = 1e-4, maxit = 10)
vglm
fits vector generalized linear models (VGLMs).
This very large class of models includes
generalized linear models (GLMs) as a special case.
vglm(formula,
     family = stop("argument 'family' needs to be assigned"),
     data = list(), weights = NULL, subset = NULL, na.action,
     etastart = NULL, mustart = NULL, coefstart = NULL,
     control = vglm.control(...), offset = NULL,
     method = "vglm.fit", model = FALSE, x.arg = TRUE,
     y.arg = TRUE, contrasts = NULL, constraints = NULL,
     extra = list(), form2 = NULL, qr.arg = TRUE,
     smart = TRUE, ...)
formula |
a symbolic description of the model to be fit.
The RHS of the formula is applied to each linear
predictor.
The effect of different variables in each linear predictor
can be controlled by specifying constraint matrices—see
|
family |
a function of class |
data |
an optional data frame containing the variables in the model.
By default the variables are taken from
|
weights |
an optional vector or matrix of (prior fixed and known) weights
to be used in the fitting process.
If the VGAM family function handles multiple responses
( Currently the |
subset |
an optional logical vector specifying a subset of observations to be used in the fitting process. |
na.action |
a function which indicates what should happen when
the data contain |
etastart |
optional starting values for the linear predictors.
It is a |
mustart |
optional starting values for the fitted values.
It can be a vector or a matrix;
if a matrix, then it has the same number of rows
as the response.
Usually |
coefstart |
optional starting values for the coefficient vector.
The length and order must match that of |
control |
a list of parameters for controlling the fitting process.
See |
offset |
a vector or |
method |
the method to be used in fitting the model.
The default (and
presently only) method |
model |
a logical value indicating whether the
model frame
should be assigned in the |
x.arg , y.arg
|
logical values indicating whether
the LM matrix and response vector/matrix used in the fitting
process should be assigned in the |
contrasts |
an optional list. See the |
constraints |
an optional If the Properties:
each constraint matrix must have As mentioned above, the labelling of each constraint matrix
must match exactly, e.g.,
|
extra |
an optional list with any extra information that might be needed by the VGAM family function. |
form2 |
the second (optional) formula.
If argument |
qr.arg |
logical value indicating whether the slot |
smart |
logical value indicating whether smart prediction
( |
... |
further arguments passed into |
A vector generalized linear model (VGLM) is loosely defined
as a statistical model that is a function of $M$ linear
predictors and can be estimated by Fisher scoring.
The central formula is given by
$$\eta_j = \beta_j^{\top} x$$
where $x$ is a vector of explanatory variables
(sometimes just a 1 for an intercept),
and $\beta_j$
is a vector of regression coefficients
to be estimated.
Here, $j = 1, \ldots, M$, where
$M$ is finite.
Then one can write
$\eta = (\eta_1, \ldots, \eta_M)^{\top}$
as a vector of linear predictors.
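As a small illustration of this notation (a sketch reusing the pneumo example from below), coef(fit, matrix = TRUE) arranges the estimates with one column per linear predictor:

pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)
coef(fit, matrix = TRUE)  # columns correspond to eta_1, ..., eta_M
head(predict(fit))        # the n x M matrix of fitted linear predictors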
Most users will find vglm
similar in flavour to
glm
.
The function vglm.fit
actually does the work.
An object of class "vglm"
, which has the
following slots. Some of these may not be assigned to save
space, and will be recreated if necessary later.
extra |
the list |
family |
the family function (of class |
iter |
the number of IRLS iterations used. |
predictors |
a |
assign |
a named list which matches the columns and the (LM) model matrix terms. |
call |
the matched call. |
coefficients |
a named vector of coefficients. |
constraints |
a named list of constraint matrices used in the fitting. |
contrasts |
the contrasts used (if any). |
control |
list of control parameter used in the fitting. |
criterion |
list of convergence criterion evaluated at the final IRLS iteration. |
df.residual |
the residual degrees of freedom. |
df.total |
the total degrees of freedom. |
dispersion |
the scaling parameter. |
effects |
the effects. |
fitted.values |
the fitted values, as a matrix. This is often the mean but may be quantiles, or the location parameter, e.g., in the Cauchy model. |
misc |
a list to hold miscellaneous parameters. |
model |
the model frame. |
na.action |
a list holding information about missing values. |
offset |
if non-zero, a |
post |
a list where post-analysis results may be put. |
preplot |
used by |
prior.weights |
initially supplied weights
(the |
qr |
the QR decomposition used in the fitting. |
R |
the R matrix in the QR decomposition used in the fitting. |
rank |
numerical rank of the fitted model. |
residuals |
the working residuals at the final IRLS iteration. |
ResSS |
residual sum of squares at the final IRLS iteration with the adjusted dependent vectors and weight matrices. |
smart.prediction |
a list of data-dependent parameters (if any) that are used by smart prediction. |
terms |
the |
weights |
the working weight matrices at the final IRLS iteration. This is in matrix-band form. |
x |
the model matrix (linear model LM, not VGLM). |
xlevels |
the levels of the factors, if any, used in fitting. |
y |
the response, in matrix form. |
This slot information is repeated at vglm-class
.
See warnings in vglm.control
.
Also, see warnings under weights
above regarding
sampling weights from complex sampling designs.
This function can fit a wide variety of
statistical models. Some of
these are harder to fit than others because
of inherent numerical
difficulties associated with some of them.
Successful model fitting
benefits from cumulative experience.
Varying the values of arguments
in the VGAM family function itself
is a good first step if
difficulties arise, especially if initial
values can be inputted.
A second, more general step, is to vary the
values of arguments in
vglm.control
.
A third step is to make use of arguments such
as etastart
,
coefstart
and mustart
.
Some VGAM family functions end in "ff"
to avoid
interference with other functions, e.g.,
binomialff
,
poissonff
.
This is because VGAM family
functions are incompatible with glm
(and also gam()
in gam and
gam
in the mgcv library).
The smart prediction (smartpred
)
library is incorporated
within the VGAM library.
The theory behind the scaling parameter is currently being made more rigorous, but it should give the same value as the scale parameter for GLMs.
In Example 5 below, the xij
argument is used to
illustrate covariates
that are specific to a linear predictor.
Here, lop
/rop
are
the ocular pressures of the left/right eye
(artificial data).
Variables leye
and reye
might be
the presence/absence of
a particular disease on the LHS/RHS eye respectively.
See
vglm.control
and
fill1
for more details and examples.
Thomas W. Yee
Yee, T. W. (2015). Vector Generalized Linear and Additive Models: With an Implementation in R. New York, USA: Springer.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
Yee, T. W. (2008).
The VGAM
Package.
R News, 8, 28–39.
vglm.control
,
vglm-class
,
vglmff-class
,
smartpred
,
vglm.fit
,
fill1
,
rrvglm
,
vgam
.
Methods functions include
add1.vglm
,
anova.vglm
,
AICvlm
,
coefvlm
,
confintvglm
,
constraints.vlm
,
drop1.vglm
,
fittedvlm
,
hatvaluesvlm
,
hdeff.vglm
,
Influence.vglm
,
linkfunvlm
,
lrt.stat.vlm
,
score.stat.vlm
,
wald.stat.vlm
,
nobs.vlm
,
npred.vlm
,
plotvglm
,
predictvglm
,
residualsvglm
,
step4vglm
,
summaryvglm
,
lrtest_vglm
,
update
,
TypicalVGAMfamilyFunction
,
etc.
# Example 1. See help(glm)
(d.AD <- data.frame(treatment = gl(3, 3),
                    outcome = gl(3, 1, 9),
                    counts = c(18, 17, 15, 20, 10, 20, 25, 13, 12)))
vglm.D93 <- vglm(counts ~ outcome + treatment, poissonff,
                 data = d.AD, trace = TRUE)
summary(vglm.D93)

# Example 2. Multinomial logit model
pneumo <- transform(pneumo, let = log(exposure.time))
vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo)

# Example 3. Proportional odds model
fit3 <- vglm(cbind(normal, mild, severe) ~ let, propodds, pneumo)
coef(fit3, matrix = TRUE)
constraints(fit3)
model.matrix(fit3, type = "lm")  # LM model matrix
model.matrix(fit3)               # Larger VGLM (or VLM) matrix

# Example 4. Bivariate logistic model
fit4 <- vglm(cbind(nBnW, nBW, BnW, BW) ~ age, binom2.or, coalminers)
coef(fit4, matrix = TRUE)
depvar(fit4)  # Response are proportions
weights(fit4, type = "prior")

# Example 5. The use of the xij argument (simple case).
# The constraint matrix for 'op' has one column.
nn <- 1000
eyesdat <- round(data.frame(lop = runif(nn),
                            rop = runif(nn),
                            op = runif(nn)), digits = 2)
eyesdat <- transform(eyesdat, eta1 = -1 + 2 * lop,
                              eta2 = -1 + 2 * lop)
eyesdat <- transform(eyesdat,
  leye = rbinom(nn, 1, prob = logitlink(eta1, inv = TRUE)),
  reye = rbinom(nn, 1, prob = logitlink(eta2, inv = TRUE)))
head(eyesdat)
fit5 <- vglm(cbind(leye, reye) ~ op,
             binom2.or(exchangeable = TRUE, zero = 3),
             data = eyesdat, trace = TRUE,
             xij = list(op ~ lop + rop + fill1(lop)),
             form2 = ~ op + lop + rop + fill1(lop))
coef(fit5)
coef(fit5, matrix = TRUE)
constraints(fit5)
fit5@control$xij
head(model.matrix(fit5))

# Example 6. The use of the 'constraints' argument.
as.character(~ bs(year,df=3))  # Get the white spaces right
clist <- list("(Intercept)" = diag(3),
              "bs(year, df = 3)" = rbind(1, 0, 0))
fit1 <- vglm(r1 ~ bs(year,df=3), gev(zero = NULL),
             data = venice, constraints = clist, trace = TRUE)
coef(fit1, matrix = TRUE)  # Check
Vector generalized linear models.
Objects can be created by calls of the form vglm(...)
.
In the following, $M$ is the number of linear predictors.
extra
:Object of class "list"
;
the extra
argument on entry to vglm
. This
contains any extra information that might be needed
by the family function.
family
:Object of class "vglmff"
.
The family function.
iter
:Object of class "numeric"
.
The number of IRLS iterations used.
predictors
:Object of class "matrix"
with $M$ columns which holds the $M$
linear predictors.
assign
:Object of class "list"
,
from class "vlm"
.
This named list gives information matching the columns and the
(LM) model matrix terms.
call
:Object of class "call"
, from class
"vlm"
.
The matched call.
coefficients
:Object of class
"numeric"
, from class "vlm"
.
A named vector of coefficients.
constraints
:Object of class "list"
, from
class "vlm"
.
A named list of constraint matrices used in the fitting.
contrasts
:Object of class "list"
, from
class "vlm"
.
The contrasts used (if any).
control
:Object of class "list"
, from class
"vlm"
.
A list of parameters for controlling the fitting process.
See vglm.control
for details.
criterion
:Object of class "list"
, from
class "vlm"
.
List of convergence criterion evaluated at the
final IRLS iteration.
df.residual
:Object of class
"numeric"
, from class "vlm"
.
The residual degrees of freedom.
df.total
:Object of class "numeric"
,
from class "vlm"
.
The total degrees of freedom.
dispersion
:Object of class "numeric"
,
from class "vlm"
.
The scaling parameter.
effects
:Object of class "numeric"
,
from class "vlm"
.
The effects.
fitted.values
:Object of class
"matrix"
, from class "vlm"
.
The fitted values.
misc
:Object of class "list"
,
from class "vlm"
.
A named list to hold miscellaneous parameters.
model
:Object of class "data.frame"
,
from class "vlm"
.
The model frame.
na.action
:Object of class "list"
,
from class "vlm"
.
A list holding information about missing values.
offset
:Object of class "matrix"
,
from class "vlm"
.
If non-zero, an $M$-column matrix of offsets.
post
:Object of class "list"
,
from class "vlm"
where post-analysis results may be put.
preplot
:Object of class "list"
,
from class "vlm"
used by plotvgam
; the plotting parameters
may be put here.
prior.weights
:Object of class
"matrix"
, from class "vlm"
holding the initially supplied weights.
qr
:Object of class "list"
,
from class "vlm"
.
QR decomposition at the final iteration.
R
:Object of class "matrix"
,
from class "vlm"
.
The R matrix in the QR decomposition used in the fitting.
rank
:Object of class "integer"
,
from class "vlm"
.
Numerical rank of the fitted model.
residuals
:Object of class "matrix"
,
from class "vlm"
.
The working residuals at the final IRLS iteration.
ResSS
:Object of class "numeric"
,
from class "vlm"
.
Residual sum of squares at the final IRLS iteration with
the adjusted dependent vectors and weight matrices.
smart.prediction
:Object of class
"list"
, from class "vlm"
.
A list of data-dependent parameters (if any)
that are used by smart prediction.
terms
:Object of class "list"
,
from class "vlm"
.
The terms
object used.
weights
:Object of class "matrix"
,
from class "vlm"
.
The weight matrices at the final IRLS iteration.
This is in matrix-band form.
x
:Object of class "matrix"
,
from class "vlm"
.
The model matrix (LM, not VGLM).
xlevels
:Object of class "list"
,
from class "vlm"
.
The levels of the factors, if any, used in fitting.
y
:Object of class "matrix"
,
from class "vlm"
.
The response, in matrix form.
Xm2
:Object of class "matrix"
,
from class "vlm"
.
See vglm-class
.
Ym2
:Object of class "matrix"
,
from class "vlm"
.
See vglm-class
.
callXm2
:Object of class "call"
, from class "vlm"
.
The matched call for argument form2
.
Class "vlm"
, directly.
signature(object = "vglm")
:
cumulative distribution function.
Applicable to, e.g., quantile regression and extreme value data models.
signature(object = "vglm")
:
density plot. Applicable to, e.g., quantile regression.
signature(object = "vglm")
:
deviance of the model (where applicable).
signature(x = "vglm")
:
diagnostic plots.
signature(object = "vglm")
:
extract the linear predictors or
predict the linear predictors at a new data frame.
signature(x = "vglm")
:
short summary of the object.
signature(object = "vglm")
:
quantile plot (only applicable to some models).
signature(object = "vglm")
:
residuals. There are various types of these.
signature(object = "vglm")
:
residuals. Shorthand for resid
.
signature(object = "vglm")
: return level plot.
Useful for extreme value data models.
signature(object = "vglm")
:
a more detailed summary of the object.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
vglm
,
vglmff-class
,
vgam-class
.
# Multinomial logit model
pneumo <- transform(pneumo, let = log(exposure.time))
vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)
Algorithmic constants and parameters for
running vglm
are set
using this function.
vglm.control(checkwz = TRUE, Check.rank = TRUE, Check.cm.rank = TRUE,
             criterion = names(.min.criterion.VGAM), epsilon = 1e-07,
             half.stepsizing = TRUE, maxit = 30, noWarning = FALSE,
             stepsize = 1, save.weights = FALSE, trace = FALSE,
             wzepsilon = .Machine$double.eps^0.75, xij = NULL, ...)
checkwz |
logical indicating whether the diagonal elements
of the working weight matrices should be checked
whether they are sufficiently positive, i.e., greater
than |
Check.rank |
logical indicating whether the rank of the VLM matrix should be checked. If this is not of full column rank then the results are not to be trusted. The default is to give an error message if the VLM matrix is not of full column rank. |
Check.cm.rank |
logical indicating whether the rank of each constraint matrix should be checked. If this is not of full column rank then an error will occur. Under no circumstances should any constraint matrix have a rank less than the number of columns. |
criterion |
character variable describing what criterion is to be
used to test for convergence. The possibilities are
listed in |
epsilon |
positive convergence tolerance epsilon. Roughly speaking,
the Newton-Raphson/Fisher-scoring iterations are assumed
to have converged when two successive |
half.stepsizing |
logical indicating if half-stepsizing is allowed. For
example, in maximizing a log-likelihood, if the next
iteration has a log-likelihood that is less than
the current value of the log-likelihood, then a half
step will be taken. If the log-likelihood is still
less than at the current position, a quarter-step
will be taken etc. Eventually a step will be taken
so that an improvement is made to the convergence
criterion. |
maxit |
maximum number of (usually Fisher-scoring) iterations allowed. Sometimes Newton-Raphson is used. |
noWarning |
logical indicating whether to suppress a warning if
convergence is not obtained within |
stepsize |
usual step size to be taken between each Newton-Raphson/Fisher-scoring iteration. It should be a value between 0 and 1, where a value of unity corresponds to an ordinary step. A value of 0.5 means half-steps are taken. Setting a value near zero will cause convergence to be generally slow but may help increase the chances of successful convergence for some family functions. |
save.weights |
logical indicating whether the |
trace |
logical indicating if output should be produced for each
iteration. Setting |
wzepsilon |
small positive number used to test whether the diagonals of the working weight matrices are sufficiently positive. |
xij |
A list of formulas.
Each formula has a RHS giving A formula or a list of formulas. The function |
... |
other parameters that may be picked up from control functions that are specific to the VGAM family function. |
Most of the control parameters are used within
vglm.fit
and you will have to look at that to
understand the full details.
Setting save.weights = FALSE
is useful for some
models because the weights
slot of the object
is the largest and so less memory is used to store the
object. However, for some VGAM family functions,
it is necessary to set save.weights = TRUE
because
the weights
slot cannot be reconstructed later.
A list with components matching the input names.
A little error
checking is done, but not much.
The list is assigned to the control
slot of
vglm
objects.
For some applications the default convergence criterion should
be tightened.
Setting something like criterion = "coef", epsilon = 1e-09
is one way to achieve this; adding
trace = TRUE
also helps monitor the convergence.
Setting maxit
to some higher number is usually not
needed, and needing to do so suggests something is wrong, e.g.,
an ill-conditioned model, over-fitting or under-fitting.
Reiterating from above,
setting trace = TRUE
is recommended in general.
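A minimal sketch of this advice (reusing the pneumo example; the tolerance shown is illustrative):

pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo,
            crit = "coef", epsilon = 1e-9, trace = TRUE)
fit@iter  # IRLS iterations needed under the tighter tolerance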
In Example 2 below there are two covariates that have
linear/additive predictor specific values. These are
handled using the xij
argument.
Thomas W. Yee
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
vglm
,
TypicalVGAMfamilyFunction
,
fill1
.
The author's homepage has further documentation about
the xij
argument;
see also Select
.
# Example 1.
pneumo <- transform(pneumo, let = log(exposure.time))
vglm(cbind(normal, mild, severe) ~ let, multinomial, pneumo,
     crit = "coef", step = 0.5, trace = TRUE, epsil = 1e-8,
     maxit = 40)

# Example 2. The use of the xij argument (simple case).
ymat <- rdiric(n <- 1000, shape = rep(exp(2), len = 4))
mydat <- data.frame(x1 = runif(n), x2 = runif(n),
                    x3 = runif(n), x4 = runif(n),
                    z1 = runif(n), z2 = runif(n),
                    z3 = runif(n), z4 = runif(n))
mydat <- transform(mydat, X = x1, Z = z1)
mydat <- round(mydat, digits = 2)
fit2 <- vglm(ymat ~ X + Z, dirichlet(parallel = TRUE),
             mydat, trace = TRUE,
             xij = list(Z ~ z1 + z2 + z3 + z4,
                        X ~ x1 + x2 + x3 + x4),
             form2 = ~ Z + z1 + z2 + z3 + z4 +
                       X + x1 + x2 + x3 + x4)
head(model.matrix(fit2, type = "lm"))   # LM model matrix
head(model.matrix(fit2, type = "vlm"))  # Big VLM model matrix
coef(fit2)
coef(fit2, matrix = TRUE)
max(abs(predict(fit2) - predict(fit2, new = mydat)))  # Predicts ok
summary(fit2)
## Not run: 
# plotvgam(fit2, se = TRUE, xlab = "x1", which.term = 1)  # Bug!
# plotvgam(fit2, se = TRUE, xlab = "z1", which.term = 2)  # Bug!
plotvgam(fit2, xlab = "x1")  # Correct
plotvgam(fit2, xlab = "z1")  # Correct
## End(Not run)

# Example 3. The use of the xij argument (complex case).
set.seed(123)
coalminers <- transform(coalminers,
  Age = (age - 42) / 5,
  dum1 = round(runif(nrow(coalminers)), digits = 2),
  dum2 = round(runif(nrow(coalminers)), digits = 2),
  dum3 = round(runif(nrow(coalminers)), digits = 2),
  dumm = round(runif(nrow(coalminers)), digits = 2))
BS <- function(x, ..., df = 3)
  sm.bs(c(x, ...), df = df)[1:length(x), , drop = FALSE]
NS <- function(x, ..., df = 3)
  sm.ns(c(x, ...), df = df)[1:length(x), , drop = FALSE]
# Equivalently...
BS <- function(x, ..., df = 3)
  head(sm.bs(c(x, ...), df = df), length(x), drop = FALSE)
NS <- function(x, ..., df = 3)
  head(sm.ns(c(x, ...), df = df), length(x), drop = FALSE)
fit3 <- vglm(cbind(nBnW, nBW, BnW, BW) ~ Age + NS(dum1, dum2),
             fam = binom2.or(exchangeable = TRUE, zero = 3),
             xij = list(NS(dum1, dum2) ~ NS(dum1, dum2) +
                        NS(dum2, dum1) + fill1(NS(dum1))),
             form2 = ~ NS(dum1, dum2) + NS(dum2, dum1) +
                       fill1(NS(dum1)) + dum1 + dum2 + dum3 +
                       Age + age + dumm,
             data = coalminers, trace = TRUE)
head(model.matrix(fit3, type = "lm"))   # LM model matrix
head(model.matrix(fit3, type = "vlm"))  # Big VLM model matrix
coef(fit3)
coef(fit3, matrix = TRUE)
## Not run: 
plotvgam(fit3, se = TRUE, lcol = 2, scol = 4, xlab = "dum1")
## End(Not run)
Family functions for the VGAM package
Objects can be created by calls of the form new("vglmff", ...)
.
In the following, $M$ is the number of linear/additive
predictors.
start1
:Object of class "expression"
to insert
code at a special position (the very start)
in vglm.fit
or vgam.fit
.
blurb
:Object of class "character"
giving
a small description of the model. Important arguments such as
parameter link functions can be expressed here.
charfun
:Object of class "function"
which
returns the characteristic function
or variance function (usually for some GLMs only).
The former uses a dummy variable x.
Both use the linear/additive predictors.
The function must have arguments
function(x, eta, extra = NULL, varfun = FALSE)
.
The eta
and extra
arguments are used to obtain
the parameter values.
If varfun = TRUE
then the function returns the
variance function, else the characteristic function (default).
Note that
one should check that the infos
slot has a list component
called charfun
which is TRUE
before attempting to
use this slot.
This is an easier way to test that this slot is operable.
constraints
:Object of class "expression"
which sets up any constraint matrices defined by arguments in the
family function. A zero
argument is always fed into
cm.zero.vgam
, whereas other constraints are fed into
cm.vgam
.
deviance
:Object of class "function"
returning the deviance of the model. This slot is optional.
If present, the function must have arguments
function(mu, y, w, residuals = FALSE, eta, extra = NULL)
.
Deviance residuals are returned if residuals = TRUE
.
rqresslot
:Object of class "function"
returning the randomized quantile residuals of the distribution.
This slot is optional.
If present, the function must have arguments
function(mu, y, w, eta, extra = NULL)
.
fini1
:Object of class "expression"
to insert
code at a special position in vglm.fit
or
vgam.fit
.
This code is evaluated immediately after the fitting.
first
:Object of class "expression"
to insert
code at a special position in vglm
or
vgam
.
infos
:Object of class "function"
which
returns a list with components such as M1
.
At present only a very few VGAM family functions have this
feature implemented.
Those that do, do not require specifying the M1
argument when used with rcim
.
initialize
:Object of class "expression"
used
to perform error checking (especially for the variable y
)
and obtain starting values for the model.
In general, etastart
or
mustart
are assigned values based on the variables y
,
x
and w
.
linkinv
:Object of class "function"
which
returns the fitted values, given the linear/additive predictors.
The function must have arguments
function(eta, extra = NULL)
.
last
:Object of class "expression"
to insert code at a
special position (at the very end) of vglm.fit()
or vgam.fit()
.
This code is evaluated after the fitting.
The list misc
is often assigned components in this slot,
which becomes the misc
slot on the fitted object.
linkfun
:Object of class "function"
which,
given the fitted values, returns the linear/additive predictors.
If present, the function must have arguments
function(mu, extra = NULL)
.
Most VGAM family functions do not have
a linkfun
function. Those that do are largely for
classical exponential families, i.e., GLMs.
loglikelihood
:Object of class "function"
returning the log-likelihood of the model. This slot is optional.
If present, the function must have arguments
function(mu, y, w, residuals = FALSE, eta, extra = NULL)
.
The argument residuals
can be ignored because
log-likelihood residuals aren't defined.
middle1
:Object of class "expression"
to insert
code at a special position in vglm.fit
or
vgam.fit
.
middle2
:Object of class "expression"
to insert
code at a special position in vglm.fit
or
vgam.fit
.
simslot
:Object of class "function"
to allow
simulate
to work.
hadof
:Object of class "function"
;
experimental.
summary.dispersion
:Object of class "logical"
indicating whether the general VGLM formula (based on a residual
sum of squares) can be used for computing the scaling/dispersion
parameter. It is TRUE
for most models except for nonlinear
regression models.
vfamily
:Object of class "character"
giving class information about the family function. Although
not developed at this stage, more flexible classes are planned
in the future. For example, family functions
sratio
, cratio
,
cumulative
, and acat
all operate on categorical data, therefore will have a special class
called "VGAMcat"
, say. Then if fit
was
a vglm
object, then coef(fit)
would print
out the vglm
coefficients plus "VGAMcat"
information as well.
deriv
:Object of class "expression"
which
returns an $M$-column matrix of first derivatives of the
log-likelihood function
with respect to the linear/additive predictors, i.e., the
score vector. In Yee and Wild (1996) this is the
$\mathbf{u}_i$ vector. Thus each row of the
matrix returned by this slot is such a vector.
weight
:Object of class "expression"
which
returns the second derivatives of the log-likelihood function
with respect to the linear/additive predictors.
This can be either the observed or expected information matrix, i.e.,
Newton-Raphson or Fisher-scoring respectively.
In Yee and Wild (1996) this is the
$\mathbf{W}_i$ matrix. Thus each row of the
matrix returned by this slot is such a matrix.
Like the
weights
slot of vglm
/vgam
, it is
stored in
matrix-band form, whereby the first
$M$ columns of the matrix are the
$M$ diagonals, followed by the upper-diagonal band, followed by the
band above that, etc. In this case, there can be up to
$M(M+1)/2$
columns, with the last column corresponding to the $(1, M)$ elements
of the weight matrices.
A small sketch of matrix-band form is given after this slot list.
validfitted, validparams
:Functions that test that the fitted values and all parameters are within range. These functions can issue a warning if violations are detected.
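A hedged sketch of matrix-band form from the user's side (reusing the pneumo multinomial example, for which $M = 2$ and so at most $M(M+1)/2 = 3$ columns are stored):

pneumo <- transform(pneumo, let = log(exposure.time))
fit <- vglm(cbind(normal, mild, severe) ~ let, multinomial, data = pneumo)
w <- weights(fit, type = "working")
dim(w)      # n rows; columns 1:M are the diagonals, then the (1,2) band
head(w, 2)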
signature(x = "vglmff")
:
short summary of the family function.
VGAM family functions are not compatible with
glm
, nor gam()
(from either gam or mgcv).
With link functions etc., one must use substitute
to
embed the options into the code. There are two different forms:
eval(substitute(expression({...}), list(...)))
for expressions, and
eval(substitute( function(...) { ... }, list(...) ))
for functions.
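As a minimal self-contained sketch of these two forms (the argument name link and the placeholder .link are conventions for illustration, not fixed by VGAM):

link <- "loglink"  # an option to be embedded into a slot
# Expression form: .link is replaced by the value of link at build time.
init.expr <- eval(substitute(expression({
  cat("initializing with link =", .link, "\n")
}), list(.link = link)))
eval(init.expr)  # prints: initializing with link = loglink
# Function form: likewise for slots of class "function".
linkinv.fun <- eval(substitute(function(eta, extra = NULL) {
  cat("inverting", .link, "\n")
  eta
}, list(.link = link)))
linkinv.fun(eta = 0)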
The extra
argument in
linkinv
, linkfun
, deviance
,
loglikelihood
, etc.
matches with the argument extra
in vglm
, vgam
and rrvglm
.
This allows input to be fed into all slots of a VGAM
family function.
The expression deriv is evaluated immediately prior to weight, so there is provision for re-use of variables etc. Programmers must be careful to choose variable names that do not interfere with vglm.fit(), vgam.fit() etc.
Programmers of VGAM family functions are encouraged
to keep to previous conventions regarding the naming of arguments,
e.g.,
link
is the argument for parameter link functions,
zero
for allowing some of the
linear/additive predictors to be an intercept term only, etc.
In general, Fisher scoring is recommended over Newton-Raphson where tractable. Although convergence is usually slightly slower, the weight matrices obtained from the expected information are positive-definite over a larger region of the parameter space.
Thomas W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
cratio() cratio(link = "clogloglink") cratio(link = "clogloglink", reverse = TRUE)
Estimates the location and scale parameters of the von Mises distribution by maximum likelihood estimation.
vonmises(llocation = extlogitlink(min = 0, max = 2*pi), lscale = "loglink", ilocation = NULL, iscale = NULL, imethod = 1, zero = NULL)
llocation , lscale
|
Parameter link functions applied to the location and scale parameters. |
ilocation |
Initial value for the location parameter. |
iscale |
Initial value for the scale parameter. |
imethod |
An integer with value 1 or 2 which specifies the initialization method. |
zero |
An integer-valued vector specifying which
linear/additive predictors are modelled as intercepts only.
The default is none of them.
If used, one can choose one value from the set {1,2}.
See CommonVGAMffArguments for more information. |
The (two-parameter) von Mises is the most commonly used distribution in practice for circular data. It has a density that can be written as
f(y; a, k) = exp[k * cos(y - a)] / (2 * pi * I_0(k))
where 0 <= y < 2*pi, the scale parameter k > 0, a is the location parameter, and I_0(k) is the modified Bessel function of order 0 evaluated at k. The mean of Y (which is the fitted value) is a and the circular variance is 1 - I_1(k) / I_0(k), where I_1 is the modified Bessel function of order 1. By default, eta_1 = extlogitlink(a, min = 0, max = 2*pi) and eta_2 = loglink(k) for this family function.
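A quick numerical check of this density and the circular variance, using only base R (a = 1 and k = 2 are arbitrary illustrative values):

dvm <- function(y, a, k)  # the von Mises density written above
  exp(k * cos(y - a)) / (2 * pi * besselI(k, nu = 0))
a <- 1; k <- 2
integrate(dvm, 0, 2 * pi, a = a, k = k)$value  # integrates to ~ 1
1 - besselI(k, 1) / besselI(k, 0)              # the circular variance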
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
rrvglm
and vgam
.
Numerically, the von Mises can be difficult to fit because the log-likelihood has multiple maxima. The user is therefore encouraged to try different starting values, i.e., make use of ilocation and iscale.
The response and the fitted values are scaled so that 0 <= y < 2*pi. The linear/additive predictors are left alone. Fisher scoring is used.
T. W. Yee
Forbes, C., Evans, M., Hastings, N. and Peacock, B. (2011). Statistical Distributions, Hoboken, NJ, USA: John Wiley and Sons, Fourth edition.
CircStats and circular currently have a lot more R functions for circular data than the VGAM package.
vdata <- data.frame(x2 = runif(nn <- 1000)) vdata <- transform(vdata, y = rnorm(nn, 2+x2, exp(0.2))) # Bad data!! fit <- vglm(y ~ x2, vonmises(zero = 2), vdata, trace = TRUE) coef(fit, matrix = TRUE) Coef(fit) with(vdata, range(y)) # Original data range(depvar(fit)) # Processed data is in [0,2*pi)
plot
and pairs
methods
for objects of
class "profile"
, but renamed as
vplot
and vpairs
.
vplot.profile(x, ...) vpairs.profile(x, colours = 2:3, ...)
x |
an object inheriting from class "profile". |
colours |
Colours to be used for the mean curves conditional on x and y respectively. |
... |
arguments passed to or from other methods. |
See
profile.glm
for details.
T. W. Yee adapted this function from
profile.glm
,
written originally
by D. M. Bates and W. N. Venables. (For S in 1996.)
profilevglm
,
confintvglm
,
lrt.stat
,
profile.glm
,
profile.nls
.
pneumo <- transform(pneumo, let = log(exposure.time)) fit1 <- vglm(cbind(normal, mild, severe) ~ let, acat, trace = TRUE, data = pneumo) pfit1 <- profile(fit1, trace = FALSE) ## Not run: vplot.profile(pfit1) vpairs.profile(pfit1) ## End(Not run)
Fits a vector cubic smoothing spline.
vsmooth.spline(x, y, w = NULL, df = rep(5, M), spar = NULL, i.constraint = diag(M), x.constraint = diag(M), constraints = list("(Intercepts)" = i.constraint, x = x.constraint), all.knots = FALSE, var.arg = FALSE, scale.w = TRUE, nk = NULL, control.spar = list())
x |
A vector, matrix or a list.
If a list, the x component is used. |
y |
A vector, matrix or a list.
If a list, the y component is used. |
w |
The weight matrices or the number of observations.
If the weight matrices, then this must be an n-row matrix. |
df |
Numerical vector containing the degrees of
freedom for each component function (smooth).
If necessary, the vector is recycled to have length equal
to the number of component functions to be estimated
(M). |
spar |
Numerical vector containing the non-negative smoothing
parameters for each component function (smooth).
If necessary, the vector is recycled to have length equal
to the number of component functions to be estimated
(M). |
all.knots |
Logical. If TRUE then all distinct x values are used as knots. |
i.constraint |
A constraint matrix for the intercepts. |
x.constraint |
A constraint matrix for x. |
constraints |
An alternative to specifying i.constraint and x.constraint separately. |
var.arg |
Logical: return the pointwise variances of the fit? Currently, this corresponds only to the nonlinear part of the fit, and may be wrong. |
scale.w |
Logical.
By default, the weights |
nk |
Number of knots.
If used, this argument overrides all.knots. |
control.spar |
See smooth.spline. |
The algorithm implemented is detailed in Yee (2000). It involves decomposing the component functions into a linear and nonlinear part, and using B-splines. The cost of the computation is O(n M^3). The argument spar contains scaled smoothing parameters.
An object of class "vsmooth.spline"
(see vsmooth.spline-class
).
See vgam
for information about an important bug.
This function is quite similar
to smooth.spline
but offers less functionality.
For example, cross validation is not implemented here.
For M = 1
, the results will be generally different,
mainly due to the different way the knots are selected.
The vector cubic smoothing spline which s() represents is computationally demanding for large M. The cost is approximately O(n M^3), where n is the number of unique abscissae.
Yet to be done: return the unscaled smoothing parameters.
Thomas W. Yee
Yee, T. W. (2000). Vector Splines and Other Vector Smoothers. Pages 529–534. In: Bethlehem, J. G. and van der Heijde, P. G. M. Proceedings in Computational Statistics COMPSTAT 2000. Heidelberg: Physica-Verlag.
vsmooth.spline-class
,
plot.vsmooth.spline
,
predict.vsmooth.spline
,
iam
,
sm.os
,
s
,
smooth.spline
.
nn <- 20; x <- 2 + 5*(nn:1)/nn x[2:4] <- x[5:7] # Allow duplication y1 <- sin(x) + rnorm(nn, sd = 0.13) y2 <- cos(x) + rnorm(nn, sd = 0.13) y3 <- 1 + sin(x) + rnorm(nn, sd = 0.13) # For constraints y <- cbind(y1, y2, y3) ww <- cbind(rep(3, nn), 4, (1:nn)/nn) (fit <- vsmooth.spline(x, y, w = ww, df = 5)) ## Not run: plot(fit) # The 1st & 3rd functions dont differ by a constant ## End(Not run) mat <- matrix(c(1,0,1, 0,1,0), 3, 2) (fit2 <- vsmooth.spline(x, y, w = ww, df = 5, i.constr = mat, x.constr = mat)) # The 1st and 3rd functions do differ by a constant: mycols <- c("orange", "blue", "orange") ## Not run: plot(fit2, lcol = mycols, pcol = mycols, las = 1) p <- predict(fit, x = model.matrix(fit, type = "lm"), deriv = 0) max(abs(depvar(fit) - with(p, y))) # Should be 0 par(mfrow = c(3, 1)) ux <- seq(1, 8, len = 100) for (dd in 1:3) { pp <- predict(fit, x = ux, deriv = dd) ## Not run: with(pp, matplot(x, y, type = "l", main = paste("deriv =", dd), lwd = 2, ylab = "", cex.axis = 1.5, cex.lab = 1.5, cex.main = 1.5)) ## End(Not run) }
The waitakere
data frame has 579 rows and 18 columns.
Altitude is explanatory, and there are binary responses
(presence/absence = 1/0 respectively) for 17 plant species.
data(waitakere)
This data frame contains the following columns:
Agathis australis, or Kauri
Beilschmiedia tawa, or Tawa
Corynocarpus laevigatus
Cyathea dealbata
Cyathea medullaris
Dacrydium cupressinum
Dacrycarpus dacrydioides
Elaecarpus dentatus
Hedycarya arborea
Species name unknown
Knightia excelsa, or Rewarewa
Kunzea ericoides
Leptospermum scoparium
Metrosideros robusta
Nestegis lanceolata
Rhopalostylis sapida
Vitex lucens, or Puriri
meters above sea level
These were collected from the Waitakere Ranges,
a small forest in northern
Auckland, New Zealand. At 579 sites in the forest,
the presence/absence
of 17 plant species was recorded, as well as the altitude.
Each site had an area of about 200 square meters.
Dr Neil Mitchell, University of Auckland.
fit <- vgam(agaaus ~ s(altitude, df = 2), binomialff, waitakere) head(predict(fit, waitakere, type = "response")) ## Not run: plot(fit, se = TRUE, lcol = "orange", scol = "blue")
Generic function that computes Wald test statistics evaluated at the null values (consequently they do not suffer from the Hauck-Donner effect).
wald.stat(object, ...) wald.stat.vlm(object, values0 = 0, subset = NULL, omit1s = TRUE, all.out = FALSE, orig.SE = FALSE, iterate.SE = TRUE, trace = FALSE, ...)
object |
A fitted vglm object. |
values0 |
Numeric vector. The null values corresponding to the null hypotheses. Recycled if necessary. |
subset |
Same as in hdeff. |
omit1s |
Logical. Does one omit the intercepts? Because the default would be to test that each intercept is equal to 0, which often does not make sense or is unimportant, the intercepts are not tested by default. If they are tested then each linear predictor must have at least one coefficient (from another variable) to be estimated. |
all.out |
Logical. If TRUE then a list is returned with further components (see the value section below), not just the Wald statistics. |
orig.SE |
Logical. If TRUE then the standard errors are computed at the MLE (the same SEs that summaryvglm reports), making the statistics vulnerable to the HDE; see the details below.
|
iterate.SE |
Logical, for the standard error computations.
If TRUE then the MLEs of the other coefficients are recomputed after one coefficient is set to its null value, before the SEs are computed; see the details below. |
trace |
Logical. If TRUE then some output is produced as the computations proceed. |
... |
Ignored for now. |
By default, summaryvglm
and most regression
modelling functions such as summary.glm
compute all the standard errors (SEs) of the estimates at
the MLE and not at 0.
This corresponds to orig.SE = TRUE
and
it is vulnerable to the Hauck-Donner effect (HDE;
see hdeff
).
One solution is to compute the SEs
at 0 (or more generally, at the values of
the argument values0
).
This function does that.
The two variants of Wald statistics are asymptotically equivalent;
however in small samples there can be an appreciable difference,
and the difference can be large if the estimates are near
to the boundary of the parameter space.
None of the tests here are joint, hence the degrees of freedom are always unity.
For a factor with more than 2 levels one can use
anova.vglm
to test for the significance of the factor.
If orig.SE = FALSE
and iterate.SE = FALSE
then
one retains the MLEs of the original fit for the values of
the other coefficients, and replaces one coefficient at a
time by the value 0 (or whatever is specified by values0).
One alternative would be to recompute the MLEs of the other
coefficients after replacing one of the values;
this is the default because iterate.SE = TRUE
and orig.SE = FALSE
.
Just like with the original IRLS iterations,
the iterations here are not guaranteed to converge.
Almost all VGAM family functions use the EIM and not
the OIM; this affects the resulting standard errors.
Also, regularity conditions are assumed for the Wald,
likelihood ratio and score tests; some VGAM family functions
such as alaplace1
are experimental and
do not satisfy such conditions, therefore naive inference is
hazardous.
The default output of this function can be seen by
setting wald0.arg = TRUE
in summaryvglm
.
By default, the signed square roots of the Wald statistics are returned, with the SEs computed at each of the null values.
If all.out = TRUE
then a list is returned with the
following components:
wald.stat
the Wald statistic,
SE0
the standard error of that coefficient,
values0
the null values.
Approximately, the default Wald statistics output are standard
normal random variates if each null hypothesis is true.
Altogether,
by the four combinations of iterate.SE
and orig.SE
,
there are three different variants of the Wald statistic
that can be returned.
This function has been tested but not thoroughly.
Convergence failure is possible for some models applied to
certain data sets; it is a good idea to set trace = TRUE
to monitor convergence.
For example, for a particular explanatory variable,
the estimated regression coefficients
of a non-parallel cumulative logit model
(see cumulative
) are ordered,
and perturbing one coefficient might disrupt the order
and create numerical problems.
Thomas W. Yee
Laskar, M. R. and M. L. King (1997). Modified Wald test for regression disturbances. Economics Letters, 56, 5–11.
Goh, K.-L. and M. L. King (1999). A correction for local biasedness of the Wald and null Wald tests. Oxford Bulletin of Economics and Statistics 61, 435–450.
lrt.stat
,
score.stat
,
summaryvglm
,
summary.glm
,
anova.vglm
,
vglm
,
hdeff
,
hdeffsev
.
set.seed(1) pneumo <- transform(pneumo, let = log(exposure.time), x3 = rnorm(nrow(pneumo))) (fit <- vglm(cbind(normal, mild, severe) ~ let + x3, propodds, pneumo)) wald.stat(fit) # No HDE here summary(fit, wald0 = TRUE) # See them here coef(summary(fit)) # Usual Wald statistics evaluated at the MLE wald.stat(fit, orig.SE = TRUE) # Same as previous line
Estimates the parameter of the standard Wald distribution by maximum likelihood estimation.
waldff(llambda = "loglink", ilambda = NULL)
llambda , ilambda
|
See CommonVGAMffArguments for information. |
The standard Wald distribution is a special case of the inverse Gaussian distribution with mu = 1. It has a density that can be written as
f(y; lambda) = sqrt(lambda / (2 * pi * y^3)) * exp(-lambda * (y - 1)^2 / (2 * y))
where y > 0 and lambda > 0. The mean of Y is 1 (returned as the fitted values) and its variance is 1 / lambda. By default, eta = loglink(lambda).
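A quick numerical check of this density, using only base R (lambda = 2 is an arbitrary illustrative value):

dwald <- function(y, lambda)  # the standard Wald density written above
  sqrt(lambda / (2 * pi * y^3)) * exp(-lambda * (y - 1)^2 / (2 * y))
lambda <- 2
integrate(dwald, 0, Inf, lambda = lambda)$value            # ~ 1
integrate(function(y) y * dwald(y, lambda), 0, Inf)$value  # mean ~ 1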
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
The VGAM family function inv.gaussianff
estimates the location parameter too.
T. W. Yee
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1, New York: Wiley.
inv.gaussianff
,
rinv.gaussian
.
wdata <- data.frame(y = rinv.gaussian(1000, mu = 1, exp(1))) wfit <- vglm(y ~ 1, waldff(ilambda = 0.2), wdata, trace = TRUE) coef(wfit, matrix = TRUE) Coef(wfit) summary(wfit)
Maximum likelihood estimation of the 2-parameter Weibull distribution. The mean is one of the parameters. No observations should be censored.
weibull.mean(lmean = "loglink", lshape = "loglink", imean = NULL, ishape = NULL, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "shape")
lmean , lshape
|
Parameter link functions applied to the (positive) mean parameter (called mu below) and the (positive) shape parameter (called a below). |
imean , ishape
|
Optional initial values for the mean and shape parameters. |
imethod , zero , probs.y
|
Details at weibullR and CommonVGAMffArguments. |
See weibullR
for most of the details
for this family function too.
The mean of Y is mu (returned as the fitted values), and this is the first parameter (a loglink link is the default because it is positive). The other parameter is the positive shape parameter a, also having a default loglink link.
This VGAM family function currently does not handle
censored data.
Fisher scoring is used to estimate the two parameters. Although the expected information matrices used here are valid in all regions of the parameter space, the regularity conditions for maximum likelihood estimation are satisfied only if a > 2 (according to Kleiber and Kotz (2003)). If this is violated then a warning message is issued. One can enforce a > 2 by choosing lshape = logofflink(offset = -2).
Common values of the shape parameter lie between 0.5 and 3.5.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
See weibullR
for more details.
This VGAM family function handles multiple responses.
T. W. Yee
weibullR
,
dweibull
,
truncweibull
,
gev
,
lognormal
,
expexpff
,
maxwell
,
rayleigh
,
gumbelII
.
## Not run: wdata <- data.frame(x2 = runif(nn <- 1000)) # Complete data wdata <- transform(wdata, mu = exp(-1 + 1 * x2), x3 = rnorm(nn), shape1 = exp(1), shape2 = exp(2)) wdata <- transform(wdata, y1 = rweibull(nn, shape1, scale = mu / gamma(1 + 1/shape1)), y2 = rweibull(nn, shape2, scale = mu / gamma(1 + 1/shape2))) fit <- vglm(cbind(y1, y2) ~ x2 + x3, weibull.mean, wdata, trace = TRUE) coef(fit, matrix = TRUE) sqrt(diag(vcov(fit))) # SEs summary(fit, presid = FALSE) ## End(Not run)
Maximum likelihood estimation of the 2-parameter Weibull distribution. No observations should be censored.
weibullR(lscale = "loglink", lshape = "loglink", iscale = NULL, ishape = NULL, lss = TRUE, nrfs = 1, probs.y = c(0.2, 0.5, 0.8), imethod = 1, zero = "shape")
lshape , lscale
|
Parameter link functions applied to the (positive) shape parameter (called a below) and the (positive) scale parameter (called b below). |
ishape , iscale
|
Optional initial values for the shape and scale parameters. |
nrfs |
Currently this argument is ignored.
Numeric, of length one, with value in [0, 1]. |
imethod |
Initialization method used if there are censored observations. Currently only the values 1 and 2 are allowed. |
zero , probs.y , lss
|
Details at |
The Weibull density for a response Y is
f(y; a, b) = (a / b) * (y / b)^(a - 1) * exp(-(y / b)^a)
for shape parameter a > 0, scale parameter b > 0, and y > 0. The cumulative distribution function is
F(y; a, b) = 1 - exp(-(y / b)^a).
The mean of Y is b * gamma(1 + 1/a) (returned as the fitted values), and the mode is at b * (1 - 1/a)^(1/a) when a > 1. The density is unbounded for a < 1. The kth moment about the origin is E(Y^k) = b^k * gamma(1 + k/a). The hazard function is (a / b) * (y / b)^(a - 1).
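A quick simulation check of the mean formula, using only base R (the shape and scale values are arbitrary):

set.seed(1)
a <- 2.5; b <- 3  # shape and scale
mean(rweibull(1e5, shape = a, scale = b))  # empirical mean
b * gamma(1 + 1/a)                         # theoretical mean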
This VGAM family function currently does not handle
censored data.
Fisher scoring is used to estimate the two parameters.
Although the expected information matrices used here are valid in all regions of the parameter space, the regularity conditions for maximum likelihood estimation are satisfied only if a > 2 (according to Kleiber and Kotz (2003)). If this is violated then a warning message is issued. One can enforce a > 2 by choosing lshape = logofflink(offset = -2).
Common values of the shape parameter lie between 0.5 and 3.5.
Summarized in Harper et al. (2011), for inference there are 4 cases to consider. If a <= 1 then the MLEs are not consistent (and the smallest observation becomes a hyperefficient solution for the location parameter in the 3-parameter case). If 1 < a < 2 then MLEs exist but are not asymptotically normal. If a = 2 then the MLEs exist and are normal and asymptotically efficient but with a slower convergence rate than when a > 2. If a > 2 then the MLEs have classical asymptotic properties.
The 3-parameter (location is the third parameter) Weibull can
be estimated by maximizing a profile log-likelihood (see,
e.g., Harper et al. (2011) and Lawless (2003)), else try
gev
which is a better parameterization.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
,
and vgam
.
This function is under development to handle
other censoring situations.
The version of this function which will handle
censored data will be
called cenweibull()
. It is currently
being written and will use
SurvS4
as input.
It should be released in later versions of VGAM.
If the shape parameter is less than two then
misleading inference may
result, e.g., in the summary
and vcov
of the object.
Successful convergence depends on having reasonably good initial values. If the initial values chosen by this function are not good, make use of the two initial value arguments.
This VGAM family function handles multiple responses.
The Weibull distribution is often an alternative to the lognormal distribution. The inverse Weibull distribution, which is that of 1/Y where Y has a Weibull(a, b) distribution, is known as the log-Gompertz distribution.
There are problems implementing the three-parameter Weibull distribution. These are because the classical regularity conditions for the asymptotic properties of the MLEs are not satisfied because the support of the distribution depends on one of the parameters.
Other related distributions are the Maxwell and Rayleigh distributions.
T. W. Yee
Kleiber, C. and Kotz, S. (2003). Statistical Size Distributions in Economics and Actuarial Sciences, Hoboken, NJ, USA: Wiley-Interscience.
Johnson, N. L. and Kotz, S. and Balakrishnan, N. (1994). Continuous Univariate Distributions, 2nd edition, Volume 1, New York: Wiley.
Lawless, J. F. (2003). Statistical Models and Methods for Lifetime Data, 2nd ed. Hoboken, NJ, USA: John Wiley & Sons.
Rinne, Horst. (2009). The Weibull Distribution: A Handbook. Boca Raton, FL, USA: CRC Press.
Gupta, R. D. and Kundu, D. (2006). On the comparison of Fisher information of the Weibull and GE distributions, Journal of Statistical Planning and Inference, 136, 3130–3144.
Harper, W. V. and Eschenbach, T. G. and James, T. R. (2011). Concerns about Maximum Likelihood Estimation for the Three-Parameter Weibull Distribution: Case Study of Statistical Software, The American Statistician, 65(1), 44–54.
Smith, R. L. (1985). Maximum likelihood estimation in a class of nonregular cases. Biometrika, 72, 67–90.
Smith, R. L. and Naylor, J. C. (1987). A comparison of maximum likelihood and Bayesian estimators for the three-parameter Weibull distribution. Applied Statistics, 36, 358–369.
weibull.mean
,
dweibull
,
truncweibull
,
gev
,
lognormal
,
expexpff
,
maxwell
,
rayleigh
,
gumbelII
.
wdata <- data.frame(x2 = runif(nn <- 1000)) # Complete data wdata <- transform(wdata, y1 = rweibull(nn, exp(1), scale = exp(-2 + x2)), y2 = rweibull(nn, exp(2), scale = exp( 1 - x2))) fit <- vglm(cbind(y1, y2) ~ x2, weibullR, wdata, trace = TRUE) coef(fit, matrix = TRUE) vcov(fit) summary(fit)
Returns either the prior weights or working weights of a VGLM object.
weightsvglm(object, type = c("prior", "working"), matrix.arg = TRUE, ignore.slot = FALSE, deriv.arg = FALSE, ...)
object |
a model object from the VGAM R package
that inherits from
a vector generalized linear model (VGLM),
e.g., a model of class "vglm". |
type |
Character, which type of weight is to be returned? The default is the first one. |
matrix.arg |
Logical, whether the answer is returned as a matrix. If not, it will be a vector. |
ignore.slot |
Logical. If TRUE then the weights slot of the object is ignored, and the working weights are recomputed. |
deriv.arg |
Logical. If TRUE then a list with components deriv and weights is returned (type = "working" is required); see below. |
... |
Currently ignored. |
Prior weights are usually inputted with the weights
argument in functions such as vglm
and
vgam
. It may refer to frequencies of the
individual data or be weight matrices specified beforehand.
Working weights are used by the IRLS algorithm. They correspond
to the second derivatives of the log-likelihood function
with respect to the linear predictors. The working weights
correspond to positive-definite weight matrices and are returned
in matrix-band form, e.g., the first M columns correspond to the diagonals, etc.
If one wants to perturb the linear predictors then the
fitted.values
slots should be assigned to the object
before calling this function. The reason is that,
for some family functions,
the variable mu
is used directly as one of the parameter
estimates, without recomputing it from eta
.
If type = "working"
and deriv = TRUE
then a
list is returned with the two components described below.
Otherwise the prior or working weights are returned depending
on the value of type
.
deriv |
Typically the first derivative of the
log-likelihood with respect to the linear predictors.
For example, this is the variable |
weights |
The working weights. |
This function is intended to be similar to
weights.glm
(see glm
).
Thomas W. Yee
glm
,
vglmff-class
,
vglm
.
pneumo <- transform(pneumo, let = log(exposure.time)) (fit <- vglm(cbind(normal, mild, severe) ~ let, cumulative(parallel = TRUE, reverse = TRUE), pneumo)) depvar(fit) # These are sample proportions weights(fit, type = "prior", matrix = FALSE) # No. of observations # Look at the working residuals nn <- nrow(model.matrix(fit, type = "lm")) M <- ncol(predict(fit)) wwt <- weights(fit, type="working", deriv=TRUE) # Matrix-band format wz <- m2a(wwt$weights, M = M) # In array format wzinv <- array(apply(wz, 3, solve), c(M, M, nn)) wresid <- matrix(NA, nn, M) # Working residuals for (ii in 1:nn) wresid[ii, ] <- wzinv[, , ii, drop = TRUE] %*% wwt$deriv[ii, ] max(abs(c(resid(fit, type = "work")) - c(wresid))) # Should be 0 (zedd <- predict(fit) + wresid) # Adjusted dependent vector
This oenological data frame concerns the amount of bitterness in 78 bottles of white wine.
data(wine)
A data frame with 4 rows and the following 7 variables.
temperature, with levels cold and warm.
whether contact of the juice with the skin was allowed or avoided, for a specified period. Two levels: no or yes.
numeric vectors, the counts. The order is none to most intense.
The data set comes from Randall (1989) and concerns a factorial experiment for investigating factors that affect the bitterness of white wines. There are two factors in the experiment: temperature at the time of crushing the grapes and contact of the juice with the skin. Two bottles of wine were fermented for each of the treatment combinations. A panel of 9 judges was selected and trained to detect bitterness, giving 72 bitterness ratings in total. Originally, the bitterness of the wine was recorded on a continuous scale in the interval from 0 (none) to 100 (intense), but later the scores were grouped, using intervals of equal length, into five ordered categories 1, 2, 3, 4 and 5.
Christensen, R. H. B. (2013) Analysis of ordinal data with cumulative link models—estimation with the R-package ordinal. R Package Version 2013.9-30. https://CRAN.R-project.org/package=ordinal.
Randall, J. H. (1989). The analysis of sensory data by generalized linear model. Biometrical Journal 31(7), 781–793.
Kosmidis, I. (2014). Improved estimation in cumulative link models. Journal of the Royal Statistical Society, Series B, Methodological, 76(1): 169–196.
wine summary(wine)
wrapup.smart
deletes any variables used by smart prediction.
Needed by both the modelling function and the prediction function.
wrapup.smart()
The variables to be deleted are .smart.prediction
,
.smart.prediction.counter
, and .smart.prediction.mode
.
The function wrapup.smart
is useful in R because
these variables are held in smartpredenv
.
## Not run: # Place this inside modelling functions such as lm, glm, vglm. wrapup.smart() # Put at the end of lm ## End(Not run)
Computes the Yeo-Johnson transformation, which is a normalizing transformation.
yeo.johnson(y, lambda, derivative = 0, epsilon = sqrt(.Machine$double.eps), inverse = FALSE)
y |
Numeric, a vector or matrix. |
lambda |
Numeric. It is recycled to the same length as y if necessary. |
derivative |
Non-negative integer. The default is
the ordinary function evaluation, otherwise the derivative
with respect to lambda. |
epsilon |
Numeric and positive value. The tolerance given
to values of lambda when comparing it to 0 or 2. |
inverse |
Logical. Return the inverse transformation? |
The Yeo-Johnson transformation can be thought of as an extension of the Box-Cox transformation. It handles both positive and negative values, whereas the Box-Cox transformation only handles positive values. Both can be used to transform the data so as to improve normality. They can be used to perform LMS quantile regression.
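A minimal sketch of the transformation's piecewise definition (Yeo and Johnson, 2000) for a scalar lambda; the package's yeo.johnson() additionally handles derivatives, the inverse, and recycling:

yj <- function(y, lambda) {
  out <- numeric(length(y))
  pos <- y >= 0
  out[pos] <- if (lambda != 0)   # case y >= 0
    ((y[pos] + 1)^lambda - 1) / lambda else log1p(y[pos])
  out[!pos] <- if (lambda != 2)  # case y < 0
    -((1 - y[!pos])^(2 - lambda) - 1) / (2 - lambda) else -log1p(-y[!pos])
  out
}
yj(c(-2, 0, 2), lambda = 0.5)
all.equal(yj(c(-2, 0, 2), 0.5), yeo.johnson(c(-2, 0, 2), 0.5))  # TRUE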
The Yeo-Johnson transformation or its inverse, or its
derivatives with respect to lambda
, of y
.
If inverse = TRUE
then the
argument derivative = 0
is required.
Thomas W. Yee
Yeo, I.-K. and Johnson, R. A. (2000). A new family of power transformations to improve normality or symmetry. Biometrika, 87, 954–959.
Yee, T. W. (2004). Quantile regression via vector generalized additive models. Statistics in Medicine, 23, 2295–2315.
y <- seq(-4, 4, len = (nn <- 200)) ltry <- c(0, 0.5, 1, 1.5, 2) # Try these values of lambda lltry <- length(ltry) psi <- matrix(NA_real_, nn, lltry) for (ii in 1:lltry) psi[, ii] <- yeo.johnson(y, lambda = ltry[ii]) ## Not run: matplot(y, psi, type = "l", ylim = c(-4, 4), lwd = 2, lty = 1:lltry, col = 1:lltry, las = 1, ylab = "Yeo-Johnson transformation", main = "Yeo-Johnson transformation with some lambda values") abline(v = 0, h = 0) legend(x = 1, y = -0.5, lty = 1:lltry, legend = as.character(ltry), lwd = 2, col = 1:lltry) ## End(Not run)
Density, distribution function, quantile function and random generation for the Yule-Simon distribution.
dyules(x, shape, log = FALSE) pyules(q, shape, lower.tail = TRUE, log.p = FALSE) qyules(p, shape) ryules(n, shape)
x , q , p , n
|
Same meaning as in runif. |
shape |
See yulesimon. |
log , lower.tail , log.p
|
Same meaning as in pnorm. |
See yulesimon
, the VGAM family function
for estimating the parameter,
for the formula of the probability density function
and other details.
dyules
gives the density,
pyules
gives the distribution function,
qyules
gives the quantile function, and
ryules
generates random deviates.
Numerical problems may occur with
qyules()
when p
is very close to 1.
T. W. Yee
dyules(1:20, 2.1) ryules(20, 2.1) round(1000 * dyules(1:8, 2)) table(ryules(1000, 2)) ## Not run: x <- 0:6 plot(x, dyules(x, shape = 2.2), type = "h", las = 1, col = "blue") ## End(Not run)
Estimating the shape parameter of the Yule-Simon distribution.
yulesimon(lshape = "loglink", ishape = NULL, nsimEIM = 200, zero = NULL)
lshape |
Link function for the shape parameter,
called shape below. |
ishape |
Optional initial value for the (positive) parameter.
See CommonVGAMffArguments for more information. |
nsimEIM , zero
|
See CommonVGAMffArguments for information. |
The probability function is
f(y; s) = s * beta(y, s + 1),   y = 1, 2, ...,
where the parameter s > 0 is the shape and beta is the beta function. The function dyules computes this probability function. The mean of Y, which is returned as the fitted values, is s / (s - 1), provided s > 1. The variance of Y is s^2 / ((s - 1)^2 * (s - 2)), provided s > 2.
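A quick numerical check of these formulas (shape = 3 is an arbitrary value; the series is truncated at a large y):

shape <- 3
sum(dyules(1:10000, shape))              # pmf sums to ~ 1
sum((1:10000) * dyules(1:10000, shape))  # mean ~ shape/(shape - 1)
shape / (shape - 1)                      # = 1.5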
The distribution was named after Udny Yule and Herbert A. Simon. Simon originally called it the Yule distribution. This family function can handle multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions
such as vglm
and vgam
.
T. W. Yee
Simon, H. A. (1955). On a class of skew distribution functions. Biometrika, 42, 425–440.
ydata <- data.frame(x2 = runif(nn <- 1000)) ydata <- transform(ydata, y = ryules(nn, shape = exp(1.5 - x2))) with(ydata, table(y)) fit <- vglm(y ~ x2, yulesimon, data = ydata, trace = TRUE) coef(fit, matrix = TRUE) summary(fit)
Density, distribution function, quantile function and random
generation for the zero-altered binomial distribution with
parameter pobs0
.
dzabinom(x, size, prob, pobs0 = 0, log = FALSE) pzabinom(q, size, prob, pobs0 = 0) qzabinom(p, size, prob, pobs0 = 0) rzabinom(n, size, prob, pobs0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If length(n) > 1, its length is taken to be the number required. |
size , prob , log
|
Parameters from the ordinary binomial distribution
(see dbinom). |
pobs0 |
Probability of (an observed) zero, called pobs0. The default value of pobs0 = 0 corresponds to the response having a positive binomial distribution. |
The probability function of Y is 0 with probability pobs0, else a positive binomial(size, prob) distribution.
dzabinom
gives the density and
pzabinom
gives the distribution function,
qzabinom
gives the quantile function, and
rzabinom
generates random deviates.
The argument pobs0 is recycled to the required length, and must have values which lie in the interval [0, 1].
T. W. Yee
size <- 10; prob <- 0.15; pobs0 <- 0.05; x <- (-1):7 dzabinom(x, size = size, prob = prob, pobs0 = pobs0) table(rzabinom(100, size = size, prob = prob, pobs0 = pobs0)) ## Not run: x <- 0:10 barplot(rbind(dzabinom(x, size = size, prob = prob, pobs0 = pobs0), dbinom(x, size = size, prob = prob)), beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, las = 1, ylab = "Probability", names.arg = as.character(x), main = paste("ZAB(size = ", size, ", prob = ", prob, ", pobs0 = ", pobs0, ") [blue] vs", " Binom(size = ", size, ", prob = ", prob, ") [orange] densities", sep = "")) ## End(Not run)
Fits a zero-altered binomial distribution based on a conditional model involving a Bernoulli distribution and a positive-binomial distribution.
zabinomial(lpobs0 = "logitlink", lprob = "logitlink", type.fitted = c("mean", "prob", "pobs0"), ipobs0 = NULL, iprob = NULL, imethod = 1, zero = NULL) zabinomialff(lprob = "logitlink", lonempobs0 = "logitlink", type.fitted = c("mean", "prob", "pobs0", "onempobs0"), iprob = NULL, ionempobs0 = NULL, imethod = 1, zero = "onempobs0")
lprob |
Parameter link function applied to the probability parameter
of the binomial distribution.
See Links for more choices. |
lpobs0 |
Link function for the parameter pobs0. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
iprob , ipobs0
|
See CommonVGAMffArguments for information. |
lonempobs0 , ionempobs0
|
Corresponding argument for the other parameterization. See details below. |
imethod , zero
|
See CommonVGAMffArguments for information. |
The response Y is zero with probability p0, else Y has a positive-binomial distribution with probability 1 - p0. Thus 0 < p0 < 1, which may be modelled as a function of the covariates.
The zero-altered binomial distribution differs from the
zero-inflated binomial distribution in that the former
has zeros coming from one source, whereas the latter
has zeros coming from the binomial distribution too. The
zero-inflated binomial distribution is implemented in
zibinomial
.
Some people call the zero-altered binomial a hurdle model.
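A quick numerical illustration of this hurdle structure (the parameter values below are arbitrary): the zero probability is pobs0 itself, and the positive probabilities are the rescaled positive-binomial ones.

size <- 10; prob <- 0.4; pobs0 <- 0.25
dzabinom(0, size, prob, pobs0 = pobs0)  # equals pobs0
y <- 3                                  # any positive value
dzabinom(y, size, prob, pobs0 = pobs0)
(1 - pobs0) * dbinom(y, size, prob) / (1 - dbinom(0, size, prob))  # same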
The input is currently a vector or one-column matrix.
By default, the two linear/additive predictors for zabinomial() are (logitlink(pobs0), logitlink(prob))^T.
The VGAM family function zabinomialff()
has a few
changes compared to zabinomial()
.
These are:
(i) the order of the linear/additive predictors is switched so the
binomial probability comes first;
(ii) argument onempobs0
is now 1 minus the probability of an observed 0,
i.e., the probability of the positive binomial distribution,
i.e., onempobs0
is 1-pobs0
;
(iii) argument zero
has a new default so that the onempobs0
is intercept-only by default.
Now zabinomialff()
is generally recommended over
zabinomial()
.
Both functions implement Fisher scoring and neither can handle
multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The fitted.values
slot of the fitted object,
which should be extracted by the generic function fitted
, returns
the mean (default) which is given by
mu = (1 - pobs0) * mub / (1 - (1 - prob)^N),
where mub is the usual binomial mean and N is the number of trials. If type.fitted = "pobs0" then pobs0 is returned.
The response should be a two-column matrix of counts, with first column giving the number of successes.
Note this family function allows pobs0 to be modelled as a function of the covariates by having zero = NULL.
It is a conditional model, not a mixture model.
These family functions effectively combine
posbinomial
and binomialff
into
one family function.
T. W. Yee
dzabinom
,
zibinomial
,
posbinomial
,
spikeplot
,
binomialff
,
dbinom
,
CommonVGAMffArguments
.
zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, size = 10, prob = logitlink(-2 + 3*x2, inverse = TRUE), pobs0 = logitlink(-1 + 2*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzabinom(nn, size = size, prob = prob, pobs0 = pobs0)) with(zdata, table(y1)) zfit <- vglm(cbind(y1, size - y1) ~ x2, zabinomial(zero = NULL), data = zdata, trace = TRUE) coef(zfit, matrix = TRUE) head(fitted(zfit)) head(predict(zfit)) summary(zfit)
Density, distribution function, quantile function and random
generation for the zero-altered geometric distribution with
parameter pobs0
.
dzageom(x, prob, pobs0 = 0, log = FALSE) pzageom(q, prob, pobs0 = 0) qzageom(p, prob, pobs0 = 0) rzageom(n, prob, pobs0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If length(n) > 1, its length is taken to be the number required. |
prob , log
|
Parameters from the ordinary geometric distribution
(see dgeom). |
pobs0 |
Probability of (an observed) zero, called pobs0. The default value of pobs0 = 0 corresponds to the response having a positive geometric distribution. |
The probability function of Y is 0 with probability pobs0, else a positive geometric(prob) distribution.
dzageom
gives the density and
pzageom
gives the distribution function,
qzageom
gives the quantile function, and
rzageom
generates random deviates.
The argument pobs0 is recycled to the required length, and must have values which lie in the interval [0, 1].
T. W. Yee
zageometric
,
zigeometric
,
rposgeom
.
prob <- 0.35; pobs0 <- 0.05; x <- (-1):7 dzageom(x, prob = prob, pobs0 = pobs0) table(rzageom(100, prob = prob, pobs0 = pobs0)) ## Not run: x <- 0:10 barplot(rbind(dzageom(x, prob = prob, pobs0 = pobs0), dgeom(x, prob = prob)), las = 1, beside = TRUE, col = c("blue", "orange"), cex.main = 0.7, ylab = "Probability", names.arg = as.character(x), main = paste("ZAG(prob = ", prob, ", pobs0 = ", pobs0, ") [blue] vs", " Geometric(prob = ", prob, ") [orange] densities", sep = "")) ## End(Not run)
Fits a zero-altered geometric distribution based on a conditional model involving a Bernoulli distribution and a positive-geometric distribution.
zageometric(lpobs0 = "logitlink", lprob = "logitlink", type.fitted = c("mean", "prob", "pobs0", "onempobs0"), imethod = 1, ipobs0 = NULL, iprob = NULL, zero = NULL) zageometricff(lprob = "logitlink", lonempobs0 = "logitlink", type.fitted = c("mean", "prob", "pobs0", "onempobs0"), imethod = 1, iprob = NULL, ionempobs0 = NULL, zero = "onempobs0")
lpobs0 |
Link function for the parameter pobs0. |
lprob |
Parameter link function applied to the probability of success,
called prob below. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
ipobs0 , iprob
|
Optional initial values for the parameters. If given, they must be in range. For multi-column responses, these are recycled sideways. |
lonempobs0 , ionempobs0
|
Corresponding argument for the other parameterization. See details below. |
zero , imethod
|
See CommonVGAMffArguments for information. |
The response Y is zero with probability p0, or Y has a positive-geometric distribution with probability 1 - p0. Thus 0 < p0 < 1, which is modelled as a function of the covariates. The zero-altered geometric distribution differs from the zero-inflated geometric distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the geometric distribution too. The zero-inflated geometric distribution is implemented in the VGAM package. Some people call the zero-altered geometric a hurdle model.
The input can be a matrix (multiple responses).
By default, the two linear/additive predictors of zageometric are (logitlink(pobs0), logitlink(prob))^T.
The VGAM family function zageometricff()
has a few
changes compared to zageometric()
.
These are:
(i) the order of the linear/additive predictors is switched so the
geometric probability comes first;
(ii) argument onempobs0
is now 1 minus the probability of an observed 0,
i.e., the probability of the positive geometric distribution,
i.e., onempobs0
is 1-pobs0
;
(iii) argument zero
has a new default so that the pobs0
is intercept-only by default.
Now zageometricff()
is generally recommended over
zageometric()
.
Both functions implement Fisher scoring and can handle
multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The fitted.values slot of the fitted object, which should be extracted by the generic function fitted, returns the mean (default) which is given by
mu = (1 - pobs0) / prob.
If type.fitted = "pobs0" then pobs0 is returned.
Convergence for this VGAM family function seems to depend quite strongly on providing good initial values.
Inference obtained from summary.vglm
and summary.vgam
may or may not be correct. In particular, the p-values, standard errors
and degrees of freedom may need adjustment. Use simulation on artificial
data to check that these are reasonable.
Note this family function allows pobs0 to be modelled as a function of the covariates. It is a conditional model, not a mixture model.
This family function effectively combines
binomialff
and
posgeometric()
and geometric
into
one family function.
However, posgeometric()
is not written because it
is trivially related to geometric
.
T. W. Yee
dzageom
,
geometric
,
zigeometric
,
spikeplot
,
dgeom
,
CommonVGAMffArguments
,
simulate.vlm
.
zdata <- data.frame(x2 = runif(nn <- 1000)) zdata <- transform(zdata, pobs0 = logitlink(-1 + 2*x2, inverse = TRUE), prob = logitlink(-2 + 3*x2, inverse = TRUE)) zdata <- transform(zdata, y1 = rzageom(nn, prob = prob, pobs0 = pobs0), y2 = rzageom(nn, prob = prob, pobs0 = pobs0)) with(zdata, table(y1)) fit <- vglm(cbind(y1, y2) ~ x2, zageometric, data = zdata, trace = TRUE) coef(fit, matrix = TRUE) head(fitted(fit)) head(predict(fit)) summary(fit)
Density, distribution function, quantile function and random
generation for the zero-altered negative binomial distribution
with parameter pobs0
.
dzanegbin(x, size, munb, pobs0 = 0, log = FALSE) pzanegbin(q, size, munb, pobs0 = 0) qzanegbin(p, size, munb, pobs0 = 0) rzanegbin(n, size, munb, pobs0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If length(n) > 1, its length is taken to be the number required. |
size , munb , log
|
Parameters from the ordinary negative binomial distribution
(see dnbinom). |
pobs0 |
Probability of zero, called pobs0. The default value of pobs0 = 0 corresponds to the response having a positive negative binomial distribution. |
The probability function of Y is 0 with probability pobs0, else a positive negative binomial(munb, size) distribution.
dzanegbin
gives the density and
pzanegbin
gives the distribution function,
qzanegbin
gives the quantile function, and
rzanegbin
generates random deviates.
The argument pobs0 is recycled to the required length, and must have values which lie in the interval [0, 1].
T. W. Yee
munb <- 3; size <- 4; pobs0 <- 0.3; x <- (-1):7 dzanegbin(x, munb = munb, size = size, pobs0 = pobs0) table(rzanegbin(100, munb = munb, size = size, pobs0 = pobs0)) ## Not run: x <- 0:10 barplot(rbind(dzanegbin(x, munb = munb, size = size, pobs0 = pobs0), dnbinom(x, mu = munb, size = size)), beside = TRUE, col = c("blue", "green"), cex.main = 0.7, ylab = "Probability", names.arg = as.character(x), las = 1, main = paste0("ZANB(munb = ", munb, ", size = ", size,", pobs0 = ", pobs0, ") [blue] vs", " NB(mu = ", munb, ", size = ", size, ") [green] densities")) ## End(Not run)
Fits a zero-altered negative binomial distribution based on a conditional model involving a binomial distribution and a positive-negative binomial distribution.
zanegbinomial(zero = "size", type.fitted = c("mean", "munb", "pobs0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lpobs0 = "logitlink", lmunb = "loglink", lsize = "loglink", imethod = 1, ipobs0 = NULL, imunb = NULL, iprobs.y = NULL, gprobs.y = (0:9)/10, isize = NULL, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) zanegbinomialff(lmunb = "loglink", lsize = "loglink", lonempobs0 = "logitlink", type.fitted = c("mean", "munb", "pobs0", "onempobs0"), isize = NULL, ionempobs0 = NULL, zero = c("size", "onempobs0"), mds.min = 1e-3, iprobs.y = NULL, gprobs.y = (0:9)/10, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, gsize.mux = exp(c(-30, -20, -15, -10, -6:3)), imethod = 1, imunb = NULL, nsimEIM = 500)
lpobs0 |
Link function for the parameter pobs0. |
lmunb |
Link function applied to the munb parameter, which is the mean of an ordinary negative binomial distribution. |
lsize |
Parameter link function applied to the reciprocal of the dispersion
parameter, called k. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
lonempobs0 , ionempobs0
|
Corresponding argument for the other parameterization. See details below. |
ipobs0 , imunb , isize
|
Optional initial values for pobs0, munb and k. |
zero |
Specifies which of the three linear predictors are
modelled as intercept-only.
All parameters can be modelled as a
function of the explanatory variables by setting zero = NULL. |
nsimEIM , imethod
|
See CommonVGAMffArguments for information. |
iprobs.y , gsize.mux , gprobs.y
|
See negbinomial. |
cutoff.prob , eps.trig
|
See negbinomial. |
mds.min , max.support , max.chunk.MB
|
See negbinomial. |
The response Y is zero with probability p0, or Y has a positive-negative binomial distribution with probability 1 - p0. Thus 0 < p0 < 1, which is modelled as a function of the covariates. The zero-altered negative binomial distribution differs from the zero-inflated negative binomial distribution in that the former has zeros coming from one source, whereas the latter has zeros coming from the negative binomial distribution too. The zero-inflated negative binomial distribution is implemented in the VGAM package. Some people call the zero-altered negative binomial a hurdle model.
For one response/species, by default, the three linear/additive predictors for zanegbinomial() are (logitlink(pobs0), loglink(munb), loglink(k))^T. This vector is recycled for multiple species.
The VGAM family function zanegbinomialff()
has a few
changes compared to zanegbinomial()
.
These are:
(i) the order of the linear/additive predictors is switched so the
negative binomial mean comes first;
(ii) argument onempobs0
is now 1 minus the probability of an observed 0,
i.e., the probability of the positive negative binomial distribution,
i.e., onempobs0
is 1-pobs0
;
(iii) argument zero
has a new default so that the pobs0
is intercept-only by default.
Now zanegbinomialff()
is generally recommended over
zanegbinomial()
.
Both functions implement Fisher scoring and can handle
multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The fitted.values slot of the fitted object, which should be extracted by the generic function fitted, returns the mean (default) which is given by
mu = (1 - pobs0) * munb / (1 - (k / (k + munb))^k).
If type.fitted = "pobs0" then pobs0 is returned.
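A quick simulation check of this fitted-mean formula (the parameter values are arbitrary):

set.seed(2)
munb <- 4; k <- 2; pobs0 <- 0.3
mean(rzanegbin(1e5, size = k, munb = munb, pobs0 = pobs0))  # empirical
(1 - pobs0) * munb / (1 - (k / (k + munb))^k)               # theoretical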
This family function is fragile; it inherits the same difficulties as
posnegbinomial
.
Convergence for this VGAM family function seems to depend quite
strongly on providing good initial values.
This VGAM family function is computationally expensive
and usually runs slowly;
setting trace = TRUE
is useful for monitoring convergence.
Inference obtained from summary.vglm
and summary.vgam
may or may not be correct. In particular, the p-values, standard errors
and degrees of freedom may need adjustment. Use simulation on artificial
data to check that these are reasonable.
Note this family function allows pobs0 to be modelled as a function of the covariates provided zero is set correctly. It is a conditional model, not a mixture model.
It is a conditional model, not a mixture model.
Simulated Fisher scoring is the algorithm.
This family function effectively combines
posnegbinomial
and binomialff
into
one family function.
This family function can handle multiple responses, e.g., more than one species.
T. W. Yee
Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996). Modelling the abundances of rare species: statistical models for counts with extra zeros. Ecological Modelling, 88, 297–308.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
gaitdnbinomial
,
posnegbinomial
,
Gaitdnbinom
,
negbinomial
,
binomialff
,
zinegbinomial
,
zipoisson
,
spikeplot
,
dnbinom
,
CommonVGAMffArguments
,
simulate.vlm
.
## Not run: 
zdata <- data.frame(x2 = runif(nn <- 2000))
zdata <- transform(zdata, pobs0 = logitlink(-1 + 2*x2, inverse = TRUE))
zdata <- transform(zdata,
  y1 = rzanegbin(nn, munb = exp(0+2*x2), size = exp(1), pobs0 = pobs0),
  y2 = rzanegbin(nn, munb = exp(1+2*x2), size = exp(1), pobs0 = pobs0))
with(zdata, table(y1))
with(zdata, table(y2))
fit <- vglm(cbind(y1, y2) ~ x2, zanegbinomial, data = zdata, trace = TRUE)
coef(fit, matrix = TRUE)
head(fitted(fit))
head(predict(fit))
## End(Not run)
Density, distribution function, quantile function and random
generation for the zero-altered Poisson distribution with
parameter pobs0
.
dzapois(x, lambda, pobs0 = 0, log = FALSE)
pzapois(q, lambda, pobs0 = 0)
qzapois(p, lambda, pobs0 = 0)
rzapois(n, lambda, pobs0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
number of observations.
If length(n) > 1, the length is taken to be the number required. |
lambda |
Vector of positive means. |
pobs0 |
Probability of zero, called pobs0. |
log |
Logical. Return the logarithm of the answer? |
The probability function of Y is 0 with probability
pobs0
, else a positive
Poisson(lambda).
dzapois
gives the density,
pzapois
gives the distribution function,
qzapois
gives the quantile function, and
rzapois
generates random deviates.
The argument pobs0
is recycled to the required length,
and must have values which lie in the interval [0, 1].
T. W. Yee
zapoisson
,
Gaitdpois
,
dzipois
.
lambda <- 3; pobs0 <- 0.2; x <- (-1):7
(ii <- dzapois(x, lambda, pobs0))
max(abs(cumsum(ii) - pzapois(x, lambda, pobs0)))  # Should be 0
table(rzapois(100, lambda, pobs0))
table(qzapois(runif(100), lambda, pobs0))
round(dzapois(0:10, lambda, pobs0) * 100)  # Should be similar

## Not run: 
x <- 0:10
barplot(rbind(dzapois(x, lambda, pobs0), dpois(x, lambda)),
        beside = TRUE, col = c("blue", "green"), las = 1,
        main = paste0("ZAP(", lambda, ", pobs0 = ", pobs0, ") [blue]",
                      " vs Poisson(", lambda, ") [green] densities"),
        names.arg = as.character(x), ylab = "Probability")
## End(Not run)
Fits a zero-altered Poisson distribution based on a conditional model involving a Bernoulli distribution and a positive-Poisson distribution.
zapoisson(lpobs0 = "logitlink", llambda = "loglink",
          type.fitted = c("mean", "lambda", "pobs0", "onempobs0"),
          imethod = 1, ipobs0 = NULL, ilambda = NULL,
          ishrinkage = 0.95, probs.y = 0.35, zero = NULL)
zapoissonff(llambda = "loglink", lonempobs0 = "logitlink",
            type.fitted = c("mean", "lambda", "pobs0", "onempobs0"),
            imethod = 1, ilambda = NULL, ionempobs0 = NULL,
            ishrinkage = 0.95, probs.y = 0.35, zero = "onempobs0")
lpobs0 |
Link function for the parameter pobs0. |
llambda |
Link function for the usual lambda parameter. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
lonempobs0 |
Corresponding argument for the other parameterization. See details below. |
imethod , ipobs0 , ionempobs0 , ilambda , ishrinkage
|
See CommonVGAMffArguments for information. |
probs.y , zero
|
See CommonVGAMffArguments for information. |
The response Y is zero with probability
pobs0,
else Y
has a positive-Poisson(lambda)
distribution with probability
1 - pobs0. Thus 0 < pobs0 < 1,
which is modelled as a function of
the covariates. The zero-altered Poisson distribution differs
from the zero-inflated Poisson distribution in that the former
has zeros coming from one source, whereas the latter has zeros
coming from the Poisson distribution too. Some people call the
zero-altered Poisson a hurdle model.
For one response/species, by default, the two linear/additive
predictors for zapoisson()
are (logitlink(pobs0), loglink(lambda))^T.
The VGAM family function zapoissonff()
has a few
changes compared to zapoisson()
.
These are:
(i) the order of the linear/additive predictors is switched so the
Poisson mean comes first;
(ii) argument onempobs0
is now 1 minus the probability of an observed 0,
i.e., the probability of the positive Poisson distribution,
i.e., onempobs0
is 1-pobs0
;
(iii) argument zero
has a new default so that the onempobs0
is intercept-only by default.
Now zapoissonff()
is generally recommended over
zapoisson()
.
Both functions implement Fisher scoring and can handle
multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The fitted.values
slot of the fitted object,
which should be extracted by the generic function fitted
,
returns the mean (default), which is given by
mu = (1 - pobs0) * lambda / (1 - exp(-lambda)).
If type.fitted = "pobs0"
then pobs0 is returned.
There are subtle differences between this family function and
zipoisson
and yip88
.
In particular, zipoisson
is a
mixture model whereas zapoisson()
and yip88
are conditional models.
Note this family function allows pobs0 to be modelled
as a function of the covariates.
This family function effectively combines pospoisson
and binomialff
into one family function.
This family function can handle multiple responses,
e.g., more than one species.
It is recommended that Gaitdpois
be used, e.g.,
rgaitdpois(nn, lambda, pobs.mlm = pobs0, a.mlm = 0)
instead of
rzapois(nn, lambda, pobs0 = pobs0)
.
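As a quick check of this recommendation (a sketch, not from the original examples), the Gaitdpois functions reproduce the ZAP probabilities exactly:

lambda <- 2; pobs0 <- 0.3; y <- 0:8
max(abs(dgaitdpois(y, lambda, pobs.mlm = pobs0, a.mlm = 0) -
        dzapois(y, lambda, pobs0 = pobs0)))  # Should be 0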
T. W. Yee
Welsh, A. H., Cunningham, R. B., Donnelly, C. F. and Lindenmayer, D. B. (1996). Modelling the abundances of rare species: statistical models for counts with extra zeros. Ecological Modelling, 88, 297–308.
Angers, J-F. and Biswas, A. (2003). A Bayesian analysis of zero-inflated generalized Poisson model. Computational Statistics & Data Analysis, 42, 37–46.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
Gaitdpois
,
rzapois
,
zipoisson
,
gaitdpoisson
,
pospoisson
,
posnegbinomial
,
spikeplot
,
binomialff
,
CommonVGAMffArguments
,
simulate.vlm
.
zdata <- data.frame(x2 = runif(nn <- 1000))
zdata <- transform(zdata,
                   pobs0  = logitlink( -1 + 1*x2, inverse = TRUE),
                   lambda = loglink(-0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata,
                   y = rgaitdpois(nn, lambda, pobs.mlm = pobs0, a.mlm = 0))
with(zdata, table(y))
fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE)
fit <- vglm(y ~ x2, zapoisson, data = zdata, trace = TRUE, crit = "coef")
head(fitted(fit))
head(predict(fit))
head(predict(fit, untransform = TRUE))
coef(fit, matrix = TRUE)
summary(fit)

# Another example ------------------------------
# Data from Angers and Biswas (2003)
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
abdata <- subset(abdata, w > 0)
Abdata <- data.frame(yy = with(abdata, rep(y, w)))
fit3 <- vglm(yy ~ 1, zapoisson, data = Abdata, trace = TRUE, crit = "coef")
coef(fit3, matrix = TRUE)
Coef(fit3)  # Estimate lambda (they get 0.6997 with SE 0.1520)
head(fitted(fit3), 1)
with(Abdata, mean(yy))  # Compare this with fitted(fit3)
The zero
argument allows users to conveniently
model certain linear/additive predictors as intercept-only.
Often a certain parameter needs to be modelled simply while other
parameters in the model may be more complex, for example, the
lambda parameter in LMS-Box-Cox quantile regression
should be modelled more simply compared to its
mu parameter.
Another example is the
shape parameter xi in a GEV distribution,
which should be modelled more simply than its
location parameter mu.
Using the
zero
argument allows this to be fitted conveniently
without having to input all the constraint matrices explicitly.
The zero
argument can be assigned an integer vector from the
set {1, ..., M}, where M
is the number of linear/additive
predictors. Full details about constraint matrices can be found in
the references.
See CommonVGAMffArguments
for more information.
Nothing is returned. It is simply a convenient argument for constraining certain linear/additive predictors to be an intercept only.
The use of other arguments may conflict with the zero
argument. For example, using constraints
to input constraint
matrices may conflict with the zero
argument.
Another example is the argument parallel
.
In general users
should not assume any particular order of precedence when
there is potential conflict of definition.
Currently no checking for consistency is made.
The argument zero
may be renamed in the future to
something better.
The argument creates the appropriate constraint matrices internally.
In all VGAM family functions zero = NULL
means
none of the linear/additive predictors are modelled as
intercepts-only.
Almost all VGAM family functions have zero = NULL
as the default, but there are some exceptions, e.g.,
binom2.or
.
Typing something like coef(fit, matrix = TRUE)
is a useful
way to ensure that the zero
argument has worked as expected.
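For instance, the following sketch (illustrative simulated data, not from the original examples) shows the constraint matrices that a zero specification creates internally; the intercept-only column of coef(fit, matrix = TRUE) and the output of constraints(fit) should agree.

# Sketch only: negbinomial() used as an illustration; assumes VGAM.
set.seed(2)
ndat <- data.frame(x2 = runif(100))
ndat <- transform(ndat, y = rnbinom(100, mu = exp(1 + x2), size = 2))
fit <- vglm(y ~ x2, negbinomial(zero = "size"), data = ndat)
coef(fit, matrix = TRUE)  # The loglink(size) column is intercept-only
constraints(fit)          # Constraint matrices created by 'zero'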
T. W. Yee
Yee, T. W. and Wild, C. J. (1996). Vector generalized additive models. Journal of the Royal Statistical Society, Series B, Methodological, 58, 481–493.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
CommonVGAMffArguments
,
constraints
.
args(multinomial)
args(binom2.or)
args(gpd)

# LMS quantile regression example
fit <- vglm(BMI ~ sm.bs(age, df = 4), lms.bcg(zero = c(1, 3)),
            data = bmi.nz, trace = TRUE)
coef(fit, matrix = TRUE)
Computes Riemann's zeta function and its first two derivatives. Also can compute the Hurwitz zeta function.
zeta(x, deriv = 0, shift = 1)
x |
A complex-valued vector/matrix. Note that analytic continuation
is currently unavailable for complex values whose real part is
less than 1 (see the Warning below). |
deriv |
An integer equalling 0 or 1 or 2, which is the order of the derivative. The default means it is computed ordinarily. |
shift |
Positive and numeric, called A below. Allows the Hurwitz zeta function to be computed. |
The (Riemann) formula for real s > 1 is
zeta(s) = sum(1/n^s, n = 1, 2, ...).
While the usual definition involves an infinite series that
converges when the real part of the argument is > 1,
more efficient methods have been devised to compute the
value. In particular, this function uses Euler–Maclaurin
summation. Theoretically, the zeta function can be computed
over the whole complex plane because of analytic continuation.
The (Riemann) formula used here for analytic continuation is
zeta(s) = 2^s * pi^(s-1) * sin(pi*s/2) * gamma(1-s) * zeta(1-s).
This is actually one of several formulas, but this one was discovered
by Riemann himself and is called the functional equation.
The Hurwitz zeta function for real s > 1 is
zeta(s, A) = sum(1/(n+A)^s, n = 0, 1, 2, ...),
where A is known here as the
shift.
Since A = 1 by default, this function will therefore return
Riemann's zeta function by default.
Currently derivatives are unavailable for the Hurwitz zeta function.
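A small numerical check of the shift argument (not part of the original examples): since the Hurwitz zeta function with shift A simply omits the first A - 1 terms of Riemann's series,

s <- 2.5
zeta(s, shift = 3) - (zeta(s) - 1 - 2^(-s))  # Should be about 0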
The default is a vector/matrix of computed values of Riemann's zeta
function.
If shift
contains values not equal to 1, then this is
Hurwitz's zeta function.
This function has not been fully tested, especially the derivatives.
In particular, analytic continuation does not work here for
complex x
with Re(x)<1
because currently the
gamma
function does not handle complex
arguments.
Estimation of the parameter of the zeta distribution can
be achieved with zetaff
.
T. W. Yee, with the help of Garry J. Tee.
Riemann, B. (1859). Ueber die Anzahl der Primzahlen unter einer gegebenen Grosse. Monatsberichte der Berliner Akademie, November 1859.
Edwards, H. M. (1974). Riemann's Zeta Function. Academic Press: New York.
Markman, B. (1965). The Riemann zeta function. BIT, 5, 138–141.
Abramowitz, M. and Stegun, I. A. (1972). Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover Publications Inc.
zetaff
,
Zeta
,
oazeta
,
oizeta
,
otzeta
,
lerch
,
gamma
.
zeta(2:10)

## Not run: 
curve(zeta, -13, 0.8, xlim = c(-12, 10), ylim = c(-1, 4),
      col = "orange", las = 1, main = expression({zeta}(x)))
curve(zeta, 1.2, 12, add = TRUE, col = "orange")
abline(v = 0, h = c(0, 1), lty = "dashed", col = "gray")

curve(zeta, -14, -0.4, col = "orange", main = expression({zeta}(x)))
abline(v = 0, h = 0, lty = "dashed", col = "gray")  # Close up plot

x <- seq(0.04, 0.8, len = 100)  # Plot of the first derivative
plot(x, zeta(x, deriv = 1), type = "l", las = 1, col = "blue",
     xlim = c(0.04, 3), ylim = c(-6, 0), main = "zeta'(x)")
x <- seq(1.2, 3, len = 100)
lines(x, zeta(x, deriv = 1), col = "blue")
abline(v = 0, h = 0, lty = "dashed", col = "gray")
## End(Not run)

zeta(2) - pi^2 / 6     # Should be 0
zeta(4) - pi^4 / 90    # Should be 0
zeta(6) - pi^6 / 945   # Should be 0
zeta(8) - pi^8 / 9450  # Should be 0
zeta(0, deriv = 1) + 0.5 * log(2*pi)  # Should be 0
gamma0 <- 0.5772156649
gamma1 <- -0.07281584548
zeta(0, deriv = 2) - gamma1 + 0.5 * (log(2*pi))^2 + pi^2/24 -
  gamma0^2 / 2  # Should be 0
zeta(0.5, deriv = 1) + 3.92264613     # Should be 0
zeta(2.0, deriv = 1) + 0.93754825431  # Should be 0
Density, distribution function, quantile function and random generation for the zeta distribution.
dzeta(x, shape, log = FALSE)
pzeta(q, shape, lower.tail = TRUE)
qzeta(p, shape)
rzeta(n, shape)
x , q , p , n
|
Same as in Poisson. |
shape |
The positive shape parameter s. |
lower.tail , log
|
Same meaning as in pnorm and dnorm. |
The density function of the zeta distribution is given by
f(y; s) = 1 / [y^(s+1) * zeta(s+1)],
where y = 1, 2, ...,
s > 0, and
zeta is
Riemann's zeta function.
dzeta
gives the density,
pzeta
gives the distribution function,
qzeta
gives the quantile function, and
rzeta
generates random deviates.
qzeta()
runs slower and slower as shape
approaches 0 and as shape
approaches 1. The VGAM family function
zetaff
estimates the shape parameter s.
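The density can be checked directly against the zeta function (a sketch, assuming VGAM is attached):

s <- 1.5; y <- 1:6
max(abs(dzeta(y, shape = s) - y^(-(s+1)) / zeta(s + 1)))  # About 0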
T. W. Yee
Johnson N. L., Kotz S., and Balakrishnan N. (1993). Univariate Discrete Distributions, 2nd ed. New York: Wiley.
zeta
,
zetaff
,
Oazeta
,
Oizeta
,
Otzeta
.
dzeta(1:20, shape = 2)
myshape <- 0.5
max(abs(pzeta(1:200, myshape) -
        cumsum(1/(1:200)^(1+myshape)) / zeta(myshape+1)))  # Should be 0

## Not run: 
plot(1:6, dzeta(1:6, 2), type = "h", las = 1, col = "orange",
     ylab = "Probability",
     main = "zeta probability function; orange: shape = 2; blue: shape = 1")
points(0.10 + 1:6, dzeta(1:6, 1), type = "h", col = "blue")
## End(Not run)
Estimates the parameter of the zeta distribution.
zetaff(lshape = "loglink", ishape = NULL,
       gshape = 1 + exp(-seq(7)), zero = NULL)
lshape , ishape , zero
|
These arguments apply to the (positive) shape parameter s. See CommonVGAMffArguments for information. |
gshape |
See CommonVGAMffArguments for information. |
In this long tailed distribution
the response must be a positive integer.
The probability function for a response Y is
f(y) = 1 / [y^(s+1) * zeta(s+1)],  y = 1, 2, ...,
where zeta is Riemann's zeta function.
The parameter s
is positive, therefore a log link
is the default.
The mean of Y
is E(Y) = zeta(s) / zeta(s+1)
(provided s > 1
) and these are the fitted values.
The variance of Y
is zeta(s-1)/zeta(s+1) - [zeta(s)/zeta(s+1)]^2
provided s > 2.
It appears that good initial values are needed for successful convergence. If convergence is not obtained, try several values ranging from values near 0 to values about 10 or more.
Multiple responses are handled.
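The following sketch (simulated data, illustrative only) checks that the fitted values of an intercept-only fit agree with the mean formula zeta(s)/zeta(s+1) above:

# Sketch only; assumes VGAM is attached and convergence is achieved.
set.seed(4)
zdat <- data.frame(y = rzeta(200, shape = 1.5))
fit  <- vglm(y ~ 1, zetaff, data = zdat)  # May need good initial values
shat <- Coef(fit)                         # Estimated shape
c(head(fitted(fit), 1), zeta(shat) / zeta(shat + 1))  # Should agree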
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
The zeta
function may be used to compute values
of the zeta function.
T. W. Yee
pp.527– of Chapter 11 of Johnson N. L., Kemp, A. W. and Kotz S. (2005). Univariate Discrete Distributions, 3rd edition, Hoboken, New Jersey: Wiley.
Knight, K. (2000). Mathematical Statistics. Boca Raton, FL, USA: Chapman & Hall/CRC Press.
zeta
,
Zeta
,
gaitdzeta
,
oazeta
,
oizeta
,
otzeta
,
diffzeta
,
hzeta
,
zipf
.
zdata <- data.frame(y = 1:5, w = c(63, 14, 5, 1, 2))  # Knight, p.304
fit <- vglm(y ~ 1, zetaff, data = zdata, trace = TRUE,
            weight = w, crit = "c")
(phat <- Coef(fit))  # 1.682557
with(zdata, cbind(round(dzeta(y, phat) * sum(w), 1), w))
with(zdata, weighted.mean(y, w))
fitted(fit, matrix = FALSE)
predict(fit)

# The following should be zero at the MLE:
with(zdata, mean(log(rep(y, w))) +
            zeta(1+phat, deriv = 1) / zeta(1+phat))
Density, distribution function, quantile function and random
generation for the zero-inflated binomial distribution with
parameter pstr0
.
dzibinom(x, size, prob, pstr0 = 0, log = FALSE)
pzibinom(q, size, prob, pstr0 = 0)
qzibinom(p, size, prob, pstr0 = 0)
rzibinom(n, size, prob, pstr0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
size |
number of trials. It is the N in the formulas given in zibinomial. |
prob |
probability of success on each trial. |
n |
Same as in runif. |
log |
Same as in dbinom. |
pstr0 |
Probability of a structural zero
(i.e., ignoring the binomial distribution),
called pstr0. |
The probability function of Y is 0 with probability
pstr0,
and Binomial(size, prob) with
probability
1 - pstr0. Thus
P(Y = 0) = pstr0 + (1 - pstr0) * P(W = 0)
where W is
distributed
Binomial(size, prob).
dzibinom
gives the density,
pzibinom
gives the distribution function,
qzibinom
gives the quantile function, and
rzibinom
generates random deviates.
The argument pstr0
is recycled to the required length,
and must have values which lie in the interval [0, 1].
These functions actually allow for zero-deflation.
That is, the resulting probability of a zero count
is less than the nominal value of the parent
distribution.
See Zipois
for more information.
T. W. Yee
zibinomial
,
Gaitdbinom
,
Binomial
.
prob <- 0.2; size <- 10; pstr0 <- 0.5
(ii <- dzibinom(0:size, size, prob, pstr0 = pstr0))
max(abs(cumsum(ii) - pzibinom(0:size, size, prob, pstr0 = pstr0)))  # 0?
table(rzibinom(100, size, prob, pstr0 = pstr0))
table(qzibinom(runif(100), size, prob, pstr0 = pstr0))
round(dzibinom(0:10, size, prob, pstr0 = pstr0) * 100)  # Similar?

## Not run: 
x <- 0:size
barplot(rbind(dzibinom(x, size, prob, pstr0 = pstr0),
              dbinom(x, size, prob)),
        beside = TRUE, col = c("blue", "green"), ylab = "Probability",
        main = paste0("ZIB(", size, ", ", prob, ", pstr0 = ", pstr0, ")",
                      " (blue) vs Binomial(", size, ", ", prob,
                      ") (green)"),
        names.arg = as.character(x), las = 1, lwd = 2)
## End(Not run)
Fits a zero-inflated binomial distribution by maximum likelihood estimation.
zibinomial(lpstr0 = "logitlink", lprob = "logitlink",
           type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"),
           ipstr0 = NULL, zero = NULL, multiple.responses = FALSE,
           imethod = 1)
zibinomialff(lprob = "logitlink", lonempstr0 = "logitlink",
             type.fitted = c("mean", "prob", "pobs0", "pstr0",
                             "onempstr0"),
             ionempstr0 = NULL, zero = "onempstr0",
             multiple.responses = FALSE, imethod = 1)
lpstr0 , lprob
|
Link functions for the parameters pstr0 and prob. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
ipstr0 |
Optional initial values for pstr0. |
lonempstr0 , ionempstr0
|
Corresponding arguments for the other parameterization. See details below. |
multiple.responses |
Logical. Currently it must be FALSE to mean the function does not handle multiple responses. |
zero , imethod
|
See CommonVGAMffArguments for information. |
These functions are based on
P(Y = 0) = pstr0 + (1 - pstr0) * (1 - prob)^N,
and for y = 1/N, 2/N, ..., 1,
P(Y = y) = (1 - pstr0) * choose(N, Ny) * prob^(Ny) * (1 - prob)^(N - Ny).
That is, the response is a sample
proportion out of N
trials, and the argument
size
in
rzibinom
is N here.
The parameter pstr0
is the probability of a structural zero,
and it satisfies
0 < pstr0 < 1.
The mean of Y
is E(Y) = (1 - pstr0) * prob
and these are returned as the fitted values
by default.
By default, the two linear/additive predictors
for
zibinomial()
are (logitlink(pstr0), logitlink(prob))^T.
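A quick sketch (not from the original examples) verifying the implied zero probability P(Y = 0) = pstr0 + (1 - pstr0)(1 - prob)^N against dzibinom():

size <- 10; prob <- 0.3; pstr0 <- 0.2
dzibinom(0, size, prob, pstr0 = pstr0) -
  (pstr0 + (1 - pstr0) * (1 - prob)^size)  # Should be 0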
The VGAM family function zibinomialff()
has a few
changes compared to zibinomial()
.
These are:
(i) the order of the linear/additive predictors is switched so the
binomial probability comes first;
(ii) argument onempstr0
is now 1 minus
the probability of a structural zero, i.e.,
the probability of the parent (binomial) component,
i.e., onempstr0
is 1-pstr0
;
(iii) argument zero
has a new default so that the onempstr0
is intercept-only by default.
Now zibinomialff()
is generally recommended over
zibinomial()
.
Both functions implement Fisher scoring.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
Numerical problems can occur.
Half-stepping is not uncommon.
If failure to converge occurs, make use of the argument ipstr0
or ionempstr0
,
or imethod
.
The response variable must have one of the formats described by
binomialff
, e.g., a factor or two column matrix or a
vector of sample proportions with the weights
argument
specifying the values of N.
To work well, one needs large values of
N and prob, i.e.,
the larger N and prob
are, the better.
If N = 1
then the model is unidentifiable since
the number of parameters is excessive.
Setting stepsize = 0.5
, say, may aid convergence.
Estimated probabilities of a structural zero and an
observed zero are returned, as in zipoisson
.
The zero-deflated binomial distribution might
be fitted by setting lpstr0 = identitylink
, albeit,
not entirely reliably. See zipoisson
for information that can be applied here. Else
try the zero-altered binomial distribution (see
zabinomial
).
T. W. Yee
Welsh, A. H., Lindenmayer, D. B. and Donnelly, C. F. (2013). Fitting and interpreting occupancy models. PLOS One, 8, 1–21.
rzibinom
,
binomialff
,
posbinomial
,
spikeplot
,
Binomial
.
size <- 10  # Number of trials; N in the notation above
nn <- 200
zdata <- data.frame(pstr0 = logitlink( 0, inverse = TRUE),  # 0.50
                    mubin = logitlink(-1, inverse = TRUE),  # Mean of usual binomial
                    sv    = rep(size, length = nn))
zdata <- transform(zdata,
                   y = rzibinom(nn, size = sv, prob = mubin, pstr0 = pstr0))
with(zdata, table(y))
fit <- vglm(cbind(y, sv - y) ~ 1, zibinomialff, data = zdata, trace = TRUE)
fit <- vglm(cbind(y, sv - y) ~ 1, zibinomialff, data = zdata, trace = TRUE,
            stepsize = 0.5)
coef(fit, matrix = TRUE)
Coef(fit)  # Useful for intercept-only models
head(fitted(fit, type = "pobs0"))  # Estimate of P(Y = 0)
head(fitted(fit))
with(zdata, mean(y))  # Compare this with fitted(fit)
summary(fit)
Density, and random generation
for the zero-inflated geometric distribution with parameter
pstr0
.
dzigeom(x, prob, pstr0 = 0, log = FALSE)
pzigeom(q, prob, pstr0 = 0)
qzigeom(p, prob, pstr0 = 0)
rzigeom(n, prob, pstr0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
prob |
see rgeom. |
n |
Same as in runif. |
pstr0 |
Probability of structural zero (ignoring the geometric
distribution), called pstr0. |
log |
Logical. Return the logarithm of the answer? |
The probability function of Y is 0 with probability
pstr0, and
geometric(prob) with
probability
1 - pstr0. Thus
P(Y = 0) = pstr0 + (1 - pstr0) * P(W = 0)
where W is distributed
geometric(prob).
dzigeom
gives the density,
pzigeom
gives the distribution function,
qzigeom
gives the quantile function, and
rzigeom
generates random deviates.
The argument pstr0
is recycled to the required length,
and must have values which lie in the interval [0, 1].
These functions actually allow for zero-deflation.
That is, the resulting probability of a zero count
is less than the nominal value of the parent
distribution.
See Zipois
for more information.
T. W. Yee
prob <- 0.5; pstr0 <- 0.2; x <- (-1):20
(ii <- dzigeom(x, prob, pstr0))
max(abs(cumsum(ii) - pzigeom(x, prob, pstr0)))  # Should be 0
table(rzigeom(1000, prob, pstr0))

## Not run: 
x <- 0:10
barplot(rbind(dzigeom(x, prob, pstr0), dgeom(x, prob)),
        beside = TRUE, col = c("blue", "orange"),
        ylab = "P[Y = y]", xlab = "y", las = 1,
        main = paste0("zigeometric(", prob, ", pstr0 = ", pstr0,
                      ") (blue) vs geometric(", prob, ") (orange)"),
        names.arg = as.character(x))
## End(Not run)
Fits a zero-inflated geometric distribution by maximum likelihood estimation.
zigeometric(lpstr0 = "logitlink", lprob = "logitlink",
            type.fitted = c("mean", "prob", "pobs0", "pstr0", "onempstr0"),
            ipstr0 = NULL, iprob = NULL,
            imethod = 1, bias.red = 0.5, zero = NULL)
zigeometricff(lprob = "logitlink", lonempstr0 = "logitlink",
              type.fitted = c("mean", "prob", "pobs0", "pstr0",
                              "onempstr0"),
              iprob = NULL, ionempstr0 = NULL,
              imethod = 1, bias.red = 0.5, zero = "onempstr0")
lpstr0 , lprob
|
Link functions for the parameters
pstr0 and prob. |
lonempstr0 , ionempstr0
|
Corresponding arguments for the other parameterization. See details below. |
bias.red |
A constant used in the initialization process. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
ipstr0 , iprob
|
See CommonVGAMffArguments for information. |
zero , imethod
|
See CommonVGAMffArguments for information. |
Function zigeometric()
is based on
P(Y = 0) = pstr0 + (1 - pstr0) * prob,
and for y = 1, 2, ...,
P(Y = y) = (1 - pstr0) * prob * (1 - prob)^y.
The parameter pstr0
satisfies
0 < pstr0 < 1. The mean of Y
is E(Y) = (1 - pstr0) * (1 - prob) / prob
and these are returned as the fitted values
by default.
By default, the two linear/additive predictors
are (logitlink(pstr0), logitlink(prob))^T.
Multiple responses are handled.
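As a sanity check of the mean formula above (a sketch with simulated values, not from the original examples):

set.seed(3)
prob <- 0.4; pstr0 <- 0.25
mean(rzigeom(1e5, prob, pstr0 = pstr0))  # Empirical mean
(1 - pstr0) * (1 - prob) / prob          # E(Y) as given above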
Estimated probabilities of a structural zero and an
observed zero can be returned, as in zipoisson
;
see fittedvlm
for information.
The VGAM family function zigeometricff()
has a few
changes compared to zigeometric()
.
These are:
(i) the order of the linear/additive predictors is switched so the
geometric probability comes first;
(ii) argument onempstr0
is now 1 minus
the probability of a structural zero, i.e.,
the probability of the parent (geometric) component,
i.e., onempstr0
is 1-pstr0
;
(iii) argument zero
has a new default so that the onempstr0
is intercept-only by default.
Now zigeometricff()
is generally recommended over
zigeometric()
.
Both functions implement Fisher scoring and can handle
multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
The zero-deflated geometric distribution might
be fitted by setting lpstr0 = identitylink
, albeit,
not entirely reliably. See zipoisson
for information that can be applied here. Else
try the zero-altered geometric distribution (see
zageometric
).
T. W. Yee
rzigeom
,
geometric
,
zageometric
,
spikeplot
,
rgeom
,
simulate.vlm
.
gdata <- data.frame(x2 = runif(nn <- 1000) - 0.5)
gdata <- transform(gdata, x3 = runif(nn) - 0.5,
                          x4 = runif(nn) - 0.5)
gdata <- transform(gdata, eta1 =  1.0 - 1.0 * x2 + 2.0 * x3,
                          eta2 = -1.0,
                          eta3 =  0.5)
gdata <- transform(gdata, prob1 = logitlink(eta1, inverse = TRUE),
                          prob2 = logitlink(eta2, inverse = TRUE),
                          prob3 = logitlink(eta3, inverse = TRUE))
gdata <- transform(gdata, y1 = rzigeom(nn, prob1, pstr0 = prob3),
                          y2 = rzigeom(nn, prob2, pstr0 = prob3),
                          y3 = rzigeom(nn, prob2, pstr0 = prob3))
with(gdata, table(y1))
with(gdata, table(y2))
with(gdata, table(y3))
head(gdata)

fit1 <- vglm(y1 ~ x2 + x3 + x4, zigeometric(zero = 1),
             data = gdata, trace = TRUE)
coef(fit1, matrix = TRUE)
head(fitted(fit1, type = "pstr0"))

fit2 <- vglm(cbind(y2, y3) ~ 1, zigeometric(zero = 1),
             data = gdata, trace = TRUE)
coef(fit2, matrix = TRUE)
summary(fit2)
Density, distribution function, quantile function and random
generation for the zero-inflated negative binomial distribution
with parameter pstr0
.
dzinegbin(x, size, prob = NULL, munb = NULL, pstr0 = 0, log = FALSE)
pzinegbin(q, size, prob = NULL, munb = NULL, pstr0 = 0)
qzinegbin(p, size, prob = NULL, munb = NULL, pstr0 = 0)
rzinegbin(n, size, prob = NULL, munb = NULL, pstr0 = 0)
x , q
|
vector of quantiles. |
p |
vector of probabilities. |
n |
Same as in runif. |
size , prob , munb , log
|
Arguments matching rnbinom (munb matches the mu argument there). |
pstr0 |
Probability of structural zero
(i.e., ignoring the negative binomial distribution),
called pstr0. |
The probability function of Y is 0 with probability
pstr0, and a negative binomial distribution with
probability
1 - pstr0. Thus
P(Y = 0) = pstr0 + (1 - pstr0) * P(W = 0)
where W is distributed as a negative binomial distribution
(see
rnbinom
.)
See negbinomial
, a VGAM family
function, for the formula of the probability density
function and other details of the negative binomial
distribution.
dzinegbin
gives the density,
pzinegbin
gives the distribution function,
qzinegbin
gives the quantile function, and
rzinegbin
generates random deviates.
The argument pstr0
is recycled to the required
length, and must have values which lie in the interval
[0, 1].
These functions actually allow for zero-deflation.
That is, the resulting probability of a zero count
is less than the nominal value of the parent
distribution.
See Zipois
for more information.
T. W. Yee
zinegbinomial
,
rnbinom
,
rzipois
.
munb <- 3; pstr0 <- 0.2; size <- k <- 10; x <- 0:10
(ii <- dzinegbin(x, pstr0 = pstr0, mu = munb, size = k))
max(abs(cumsum(ii) - pzinegbin(x, pstr0 = pstr0, mu = munb, size = k)))
table(rzinegbin(100, pstr0 = pstr0, mu = munb, size = k))
table(qzinegbin(runif(1000), pstr0 = pstr0, mu = munb, size = k))
round(dzinegbin(x, pstr0 = pstr0, mu = munb, size = k) * 1000)  # Similar?

## Not run: 
barplot(rbind(dzinegbin(x, pstr0 = pstr0, mu = munb, size = k),
              dnbinom(x, mu = munb, size = k)),
        las = 1, beside = TRUE, col = c("blue", "green"),
        ylab = "Probability",
        main = paste0("ZINB(mu = ", munb, ", k = ", k, ", pstr0 = ", pstr0,
                      ") (blue) vs NB(mu = ", munb, ", size = ", k,
                      ") (green)"),
        names.arg = as.character(x))
## End(Not run)
Fits a zero-inflated negative binomial distribution by full maximum likelihood estimation.
zinegbinomial(zero = "size", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lpstr0 = "logitlink", lmunb = "loglink", lsize = "loglink", imethod = 1, ipstr0 = NULL, imunb = NULL, iprobs.y = NULL, isize = NULL, gprobs.y = (0:9)/10, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) zinegbinomialff(lmunb = "loglink", lsize = "loglink", lonempstr0 = "logitlink", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), imunb = NULL, isize = NULL, ionempstr0 = NULL, zero = c("size", "onempstr0"), imethod = 1, iprobs.y = NULL, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, gprobs.y = (0:9)/10, gsize.mux = exp((-12:6)/2), mds.min = 1e-3, nsimEIM = 500)
zinegbinomial(zero = "size", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), mds.min = 1e-3, nsimEIM = 500, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, lpstr0 = "logitlink", lmunb = "loglink", lsize = "loglink", imethod = 1, ipstr0 = NULL, imunb = NULL, iprobs.y = NULL, isize = NULL, gprobs.y = (0:9)/10, gsize.mux = exp(c(-30, -20, -15, -10, -6:3))) zinegbinomialff(lmunb = "loglink", lsize = "loglink", lonempstr0 = "logitlink", type.fitted = c("mean", "munb", "pobs0", "pstr0", "onempstr0"), imunb = NULL, isize = NULL, ionempstr0 = NULL, zero = c("size", "onempstr0"), imethod = 1, iprobs.y = NULL, cutoff.prob = 0.999, eps.trig = 1e-7, max.support = 4000, max.chunk.MB = 30, gprobs.y = (0:9)/10, gsize.mux = exp((-12:6)/2), mds.min = 1e-3, nsimEIM = 500)
lpstr0 , lmunb , lsize
|
Link functions for the parameters pstr0, munb and size. |
type.fitted |
See CommonVGAMffArguments and fittedvlm for information. |
ipstr0 , isize , imunb
|
Optional initial values for pstr0, size and munb. |
lonempstr0 , ionempstr0
|
Corresponding arguments for the other parameterization. See details below. |
imethod |
An integer with value 1 or 2 or 3 which specifies the initialization method. If failure to converge occurs try another value. |
zero |
Specifies which linear/additive predictors are to be modelled
as intercept-only. They can be such that their absolute values are
either 1 or 2 or 3.
The default is the size parameter. |
nsimEIM |
See CommonVGAMffArguments for information. |
iprobs.y , cutoff.prob , max.support , max.chunk.MB
|
See negbinomial for information. |
mds.min , eps.trig
|
See negbinomial for information. |
gprobs.y , gsize.mux
|
These arguments relate to grid searching in the initialization process.
See negbinomial for information. |
These functions are based on
P(Y = 0) = pstr0 + (1 - pstr0) * (k/(k + munb))^k,
and for y = 1, 2, ...,
P(Y = y) = (1 - pstr0) * dnbinom(y, mu = munb, size = k).
The parameter pstr0 satisfies
0 < pstr0 < 1.
The mean of Y
is (1 - pstr0) * munb
(returned as the fitted values).
By default, the three linear/additive predictors
for
zinegbinomial()
are (logitlink(pstr0), loglink(munb), loglink(size))^T.
See
negbinomial
, another VGAM family function,
for the formula of the probability density function and other details
of the negative binomial distribution.
Independent multiple responses are handled.
If so then arguments ipstr0
and isize
may be vectors
with length equal to the number of responses.
The VGAM family function zinegbinomialff()
has a few
changes compared to zinegbinomial()
.
These are:
(i) the order of the linear/additive predictors is switched so the
NB mean comes first;
(ii) onempstr0
is now 1 minus the probability of a structural 0,
i.e., the probability of the parent (NB) component,
i.e., onempstr0
is 1-pstr0
;
(iii) argument zero
has a new default so that the onempstr0
is intercept-only by default.
Now zinegbinomialff()
is generally recommended over
zinegbinomial()
.
Both functions implement Fisher scoring and can handle
multiple responses.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
,
and vgam
.
This model can be difficult to fit to data,
and this family function is fragile.
The model is especially difficult to fit reliably when
the estimated size (k) parameter is very large (so the model
approaches a zero-inflated Poisson distribution) or
much less than 1
(and gets more difficult as it approaches 0).
Numerical problems can also occur, e.g., when the probability of
a zero is actually less than, and not more than, the nominal
probability of zero.
Similarly, numerical problems can occur if there is little
or no 0-inflation, or when the sample size is small.
Half-stepping is not uncommon.
Successful convergence is sensitive to the initial values, therefore
if failure to converge occurs, try using combinations of arguments
stepsize
(in vglm.control
),
imethod
,
imunb
,
ipstr0
,
isize
, and/or
zero
if there are explanatory variables.
Else try fitting an ordinary negbinomial
model
or a zipoisson
model.
This VGAM family function can be computationally expensive
and can run slowly;
setting trace = TRUE
is useful for monitoring convergence.
Estimated probabilities of a structural zero and an
observed zero can be returned, as in zipoisson
;
see fittedvlm
for more information.
If k is large then the use of VGAM family function
zipoisson
is probably preferable.
This follows because the Poisson is the limiting distribution of a
negative binomial as k tends to infinity.
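This limit can be illustrated numerically (a sketch, not from the original examples): with a very large size the ZINB probabilities are close to the ZIP probabilities.

y <- 0:8
max(abs(dzinegbin(y, size = 1e6, munb = 3, pstr0 = 0.2) -
        dzipois(y, lambda = 3, pstr0 = 0.2)))  # Near 0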
The zero-deflated negative binomial distribution
might be fitted by setting lpstr0 = identitylink
,
albeit, not entirely reliably. See zipoisson
for information that can be applied here. Else try
the zero-altered negative binomial distribution (see
zanegbinomial
).
T. W. Yee
gaitdnbinomial
,
Zinegbin
,
negbinomial
,
spikeplot
,
rpois
,
CommonVGAMffArguments
.
## Not run: 
# Example 1
ndata <- data.frame(x2 = runif(nn <- 1000))
ndata <- transform(ndata,
                   pstr0 = logitlink(-0.5 + 1 * x2, inverse = TRUE),
                   munb  = exp( 3 + 1 * x2),
                   size  = exp( 0 + 2 * x2))
ndata <- transform(ndata,
                   y1 = rzinegbin(nn, mu = munb, size = size, pstr0 = pstr0))
with(ndata, table(y1)["0"] / sum(table(y1)))
nfit <- vglm(y1 ~ x2, zinegbinomial(zero = NULL), data = ndata)
coef(nfit, matrix = TRUE)
summary(nfit)
head(cbind(fitted(nfit), with(ndata, (1 - pstr0) * munb)))
round(vcov(nfit), 3)

# Example 2: RR-ZINB could also be called a COZIVGLM-ZINB-2
ndata <- data.frame(x2 = runif(nn <- 2000))
ndata <- transform(ndata, x3 = runif(nn))
ndata <- transform(ndata, eta1 = 3 + 1 * x2 + 2 * x3)
ndata <- transform(ndata,
                   pstr0 = logitlink(-1.5 + 0.5 * eta1, inverse = TRUE),
                   munb  = exp(eta1),
                   size  = exp(4))
ndata <- transform(ndata,
                   y1 = rzinegbin(nn, pstr0 = pstr0, mu = munb, size = size))
with(ndata, table(y1)["0"] / sum(table(y1)))
rrzinb <- rrvglm(y1 ~ x2 + x3, zinegbinomial(zero = NULL), data = ndata,
                 Index.corner = 2, str0 = 3, trace = TRUE)
coef(rrzinb, matrix = TRUE)
Coef(rrzinb)
## End(Not run)
Fits an exchangeable bivariate odds-ratio model to two binary responses with a complementary log-log link. The data are assumed to come from a zero-inflated Poisson distribution that has been converted to presence/absence.
zipebcom(lmu12 = "clogloglink", lphi12 = "logitlink", loratio = "loglink",
         imu12 = NULL, iphi12 = NULL, ioratio = NULL,
         zero = c("phi12", "oratio"), tol = 0.001, addRidge = 0.001)
lmu12 , imu12
|
Link function, extra argument and optional initial values for
the first (and second) marginal probabilities.
|
lphi12 |
Link function applied to the phi12 parameter, the mixing probability (see details below). |
loratio |
Link function applied to the odds ratio.
See binom2.or for more information. |
iphi12 , ioratio
|
Optional initial values for phi12 and the odds ratio. |
zero |
Which linear/additive predictors are modelled as intercept-only?
A NULL means none.
The default has both phi12 and the odds ratio intercept-only. |
tol |
Tolerance for testing independence. Should be some small positive numerical value. |
addRidge |
Some small positive numerical value.
The first two diagonal elements of the working weight matrices are
multiplied by 1 + addRidge to make them positive-definite. |
This VGAM family function fits an exchangeable bivariate odds
ratio model (binom2.or
) with a clogloglink
link.
The data are assumed to come from a zero-inflated Poisson (ZIP) distribution
that has been converted to presence/absence.
Explicitly, the default model is
cloglog[P(Y_j = 1)] = eta_1, j = 1, 2,
for the (exchangeable) marginals, and
logit(phi) = eta_2,
for the mixing parameter, and
log(psi) = eta_3,
which specifies the dependency between the two responses. Here, the responses
equal 1 for a success and a 0 for a failure, and the odds ratio is often
written psi = p00 * p11 / (p10 * p01).
We have mu12 = mu1 = mu2
because of the exchangeability.
The second linear/additive predictor models the
phi parameter (see
zipoisson
).
The third linear/additive predictor is the same as binom2.or
,
viz., the log odds ratio.
Suppose a dataset1 comes from a Poisson distribution that has been
converted to presence/absence, and that both marginal probabilities
are the same (exchangeable).
Then binom2.or("clogloglink", exch=TRUE)
is appropriate.
Now suppose a dataset2 comes from a zero-inflated Poisson
distribution. The first linear/additive predictor of zipebcom()
applied to dataset2
is the same as that of
binom2.or("clogloglink", exch=TRUE)
applied to dataset1.
That is, the zero inflation has been taken care
of by
zipebcom()
so that it is just like the simpler
binom2.or
.
Note that
mu12 = prob12 / (1-phi12)
where prob12
is the probability
of a 1 under the ZIP model.
Here, mu12
correspond to mu1
and mu2
in the
binom2.or
-Poisson model.
If phi12 = 0 then
zipebcom()
should be equivalent to
binom2.or("clogloglink", exch=TRUE)
.
Full details are given in Yee and Dirnbock (2009).
The leading 2 x 2 submatrix of the expected
information matrix (EIM) is of rank-1, not 2! This is due to the
fact that the parameters corresponding to the first two
linear/additive predictors are unidentifiable. The quick fix
around this problem is to use the
addRidge
adjustment.
The model is fitted by maximum likelihood estimation since the full
likelihood is specified. Fisher scoring is implemented.
The default models phi12 and the odds ratio
as
single parameters only, but this
can be circumvented by setting
zero = NULL
in order to model both the mixing probability
and odds ratio as a function of all the explanatory
variables.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as vglm
and vgam
.
When fitted, the fitted.values
slot of the object contains the
four joint probabilities, labelled as (Y1, Y2) = (0,0),
(0,1), (1,0), (1,1), respectively.
These estimated probabilities should be extracted with the
fitted
generic function.
The fact that the EIM is not of full rank may mean the model is
naturally ill-conditioned.
Not sure whether there are any negative consequences wrt theory.
For now
it is certainly safer to fit binom2.or
to bivariate binary
responses.
The "12"
in the argument names reinforces to the user the
exchangeability assumption.
The name of this VGAM family function stands for
zero-inflated Poisson exchangeable bivariate complementary
log-log odds-ratio model or ZIP-EBCOM.
See binom2.or
for details that are pertinent to this
VGAM family function too.
Even better initial values are usually needed here.
The xij
(see vglm.control
) argument enables
environmental variables with different values at the two time points
to be entered into an exchangeable binom2.or
model.
See the author's webpage for sample code.
Yee, T. W. and Dirnbock, T. (2009). Models for analysing species' presence/absence data at two time points. Journal of Theoretical Biology, 259(4), 684–694.
binom2.or
,
zipoisson
,
clogloglink
,
CommonVGAMffArguments
.
## Not run: 
zdata <- data.frame(x2 = seq(0, 1, len = (nsites <- 2000)))
zdata <- transform(zdata,
                   eta1   = -3 + 5 * x2,
                   phi1   = logitlink(-1, inverse = TRUE),
                   oratio = exp(2))
zdata <- transform(zdata,
                   mu12 = clogloglink(eta1, inverse = TRUE) * (1 - phi1))
tmat <- with(zdata,
             rbinom2.or(nsites, mu1 = mu12, oratio = oratio, exch = TRUE))
zdata <- transform(zdata, ybin1 = tmat[, 1], ybin2 = tmat[, 2])
with(zdata, table(ybin1, ybin2)) / nsites  # For interest only

# Various plots of the data, for interest only
par(mfrow = c(2, 2))
plot(jitter(ybin1) ~ x2, data = zdata, col = "blue")
plot(jitter(ybin2) ~ jitter(ybin1), data = zdata, col = "blue")
plot(mu12 ~ x2, data = zdata, col = "blue", type = "l", ylim = 0:1,
     ylab = "Probability", main = "Marginal probability and phi")
with(zdata, abline(h = phi1[1], col = "red", lty = "dashed"))
tmat2 <- with(zdata, dbinom2.or(mu1 = mu12, oratio = oratio, exch = TRUE))
with(zdata, matplot(x2, tmat2, col = 1:4, type = "l", ylim = 0:1,
                    ylab = "Probability", main = "Joint probabilities"))

# Now fit the model to the data.
fit <- vglm(cbind(ybin1, ybin2) ~ x2, zipebcom, data = zdata, trace = TRUE)
coef(fit, matrix = TRUE)
summary(fit)
vcov(fit)
## End(Not run)
Estimates the parameter of the Zipf distribution.
zipf(N = NULL, lshape = "loglink", ishape = NULL)
N |
Number of elements, an integer satisfying 1 < N < Inf. The default is to use the maximum value of the response. |
lshape |
Parameter link function applied to the (positive) shape parameter s. |
ishape |
Optional initial value for the parameter s. |
The probability function for a response Y is
f(y; s) = (1/y^s) / sum(1/i^s, i = 1, ..., N),  y = 1, 2, ..., N,
where s is the exponent characterizing the distribution.
The mean of Y,
which is returned as the fitted values,
is E(Y) = H(N, s-1) / H(N, s)
where H(N, m) = sum(i^(-m), i = 1, ..., N)
is the
Nth generalized harmonic number.
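The mean formula can be checked directly (a sketch, not from the original examples):

N <- 10; s <- 2
H <- function(m) sum((1:N)^(-m))           # Generalized harmonic numbers
sum((1:N) * dzipf(1:N, N = N, shape = s))  # E(Y) computed directly
H(s - 1) / H(s)                            # H(N, s-1) / H(N, s)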
Zipf's law is an experimental law which is often applied
to the study of the frequency of words in a corpus of
natural language utterances. It states that the frequency
of any word is inversely proportional to its rank in the
frequency table. For example, "the"
and "of"
are the first two most common words, and Zipf's law states
that "the"
is twice as common as "of"
.
Many other natural phenomena conform to Zipf's law.
An object of class "vglmff"
(see vglmff-class
).
The object is used by modelling functions such as
vglm
and vgam
.
Upon convergence, the N
is stored as @misc$N
.
T. W. Yee
pp.526– of Chapter 11 of Johnson N. L., Kemp, A. W. and Kotz S. (2005). Univariate Discrete Distributions, 3rd edition, Hoboken, New Jersey, USA: Wiley.
zdata <- data.frame(y = 1:5, ofreq = c(63, 14, 5, 1, 2))
zfit <- vglm(y ~ 1, zipf, data = zdata, trace = TRUE, weight = ofreq)
zfit <- vglm(y ~ 1, zipf(lshape = "identitylink", ishape = 3.4),
             data = zdata, trace = TRUE, weight = ofreq, crit = "coef")
zfit@misc$N
(shape.hat <- Coef(zfit))
with(zdata, weighted.mean(y, ofreq))
fitted(zfit, matrix = FALSE)
Density, distribution function, quantile function and random generation for the Zipf distribution.
dzipf(x, N, shape, log = FALSE)
pzipf(q, N, shape, log.p = FALSE)
qzipf(p, N, shape)
rzipf(n, N, shape)
x , q , p , n
|
Same as in Zeta. |
N , shape
|
the number of elements, and the exponent characterizing the
distribution.
See zipf for more details. |
log , log.p
|
Same meaning as in pnorm and dnorm. |
This is a finite version of the zeta distribution.
See zetaff
for more details.
In general, these functions run slower and slower as N
increases.
dzipf
gives the density,
pzipf
gives the cumulative distribution function,
qzipf
gives the quantile function, and
rzipf
generates random deviates.
T. W. Yee
N <- 10; shape <- 0.5; y <- 1:N
proby <- dzipf(y, N = N, shape = shape)

## Not run: 
plot(proby ~ y, type = "h", col = "blue", ylim = c(0, 0.2),
     ylab = "Probability", lwd = 2, las = 1,
     main = paste0("Zipf(N = ", N, ", shape = ", shape, ")"))
## End(Not run)

sum(proby)  # Should be 1
max(abs(cumsum(proby) - pzipf(y, N = N, shape = shape)))  # 0?
Density, distribution function, quantile function and random generation for the Mandelbrot distribution.
dzipfmb(x, shape, start = 1, log = FALSE)
pzipfmb(q, shape, start = 1, lower.tail = TRUE, log.p = FALSE)
qzipfmb(p, shape, start = 1)
rzipfmb(n, shape, start = 1)
x |
vector of (non-negative integer) quantiles. |
q |
vector of quantiles. |
p |
vector of probabilities. |
n |
number of random values to return. |
shape |
Vector of positive values of the shape parameter. |
start |
integer, the minimum value of the support of the distribution. |
log , log.p
|
logical; if TRUE, probabilities p are given as log(p) |
lower.tail |
logical; if TRUE (default), probabilities are P[X <= x], otherwise, P[X > x]. |
The probability mass function of the Zipf-Mandelbrot distribution
has a positive shape parameter, and the starting value start
(the minimum of the support) is by default 1.
dzipfmb
gives the density,
pzipfmb
gives the distribution function,
qzipfmb
gives the quantile function, and
rzipfmb
generates random deviates.
M. Chou, with edits by T. W. Yee.
Mandelbrot, B. (1961). On the theory of word frequencies and on related Markovian models of discourse. In R. Jakobson, Structure of Language and its Mathematical Aspects, pp. 190–219, Providence, RI, USA. American Mathematical Society.
Moreno-Sanchez, I. and Font-Clos, F. and Corral, A. (2016). Large-Scale Analysis of Zipf's Law in English Texts. PLos ONE, 11(1), 1–19.
Zipf
.
aa <- 1:10
(pp <- pzipfmb(aa, shape = 0.5, start = 1))
cumsum(dzipfmb(aa, shape = 0.5, start = 1))  # Should be the same
qzipfmb(pp, shape = 0.5, start = 1) - aa     # Should be all 0s

rdiffzeta(30, 0.5)

## Not run: 
x <- 1:10
plot(x, dzipfmb(x, shape = 0.5), type = "h", ylim = 0:1,
     sub = "shape=0.5", las = 1, col = "blue", ylab = "Probability",
     main = "Zipf-Mandelbrot distribution: blue=PMF; orange=CDF")
lines(x + 0.1, pzipfmb(x, shape = 0.5), col = "orange",
      lty = 3, type = "h")
## End(Not run)
Density, distribution function, quantile function and random
generation for the zero-inflated and zero-deflated Poisson
distribution with parameter pstr0.
dzipois(x, lambda, pstr0 = 0, log = FALSE)
pzipois(q, lambda, pstr0 = 0)
qzipois(p, lambda, pstr0 = 0)
rzipois(n, lambda, pstr0 = 0)
x, q: vector of quantiles.
p: vector of probabilities.
n: number of observations. Must be a single positive integer.
lambda: vector of positive means.
pstr0: probability of a structural zero
(i.e., ignoring the Poisson distribution),
called phi in the literature; the default pstr0 = 0
corresponds to the response having an ordinary Poisson distribution.
log: logical. Return the logarithm of the answer?
The probability mass function of Y is 0 with probability phi,
and Poisson(lambda) with probability 1 - phi. Thus
P(Y = 0) = phi + (1 - phi) * P(W = 0),
where W is distributed Poisson(lambda).
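Equivalently, the whole PMF can be computed by hand; a minimal
check (a sketch; the value printed should be essentially zero):

lambda <- 2; phi <- 0.3; y <- 0:8
pmf.byhand <- (1 - phi) * dpois(y, lambda) + phi * (y == 0)
max(abs(pmf.byhand - dzipois(y, lambda, pstr0 = phi)))  # Should be ~0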
dzipois gives the density,
pzipois gives the distribution function,
qzipois gives the quantile function, and
rzipois generates random deviates.
The argument pstr0 is recycled to the required length, and
ordinarily must have values which lie in the interval [0, 1].
These functions actually allow for the
zero-deflated Poisson (ZDP) distribution.
Here, pstr0 is also permitted to lie in the interval
[-1/expm1(lambda), 0].
The resulting probability of a zero count is less than
the nominal Poisson value, and the use of pstr0 to
stand for the probability of a structural zero loses its
meaning.
When pstr0 equals -1/expm1(lambda)
this corresponds to the positive-Poisson distribution
(e.g., see Gaitdpois), also
called the zero-truncated Poisson or ZTP.
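This boundary case can be verified directly; a small sketch:

lambda <- 1.5
deflat.limit <- -1 / expm1(lambda)  # Lower bound for pstr0
y <- 1:8  # At the boundary, the ZIP collapses to the ZTP:
max(abs(dzipois(y, lambda, pstr0 = deflat.limit) -
        dpois(y, lambda) / (1 - dpois(0, lambda))))  # Should be ~0
dzipois(0, lambda, pstr0 = deflat.limit)  # P(Y = 0) is 0 at the boundary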
The zero-modified Poisson (ZMP) is a combination of the ZIP, ZDP and ZTP distributions. The family functions zipoisson and zipoissonff can fit it; see there for details.
T. W. Yee
zipoisson, Gaitdpois, dpois, rzinegbin.
lambda <- 3; pstr0 <- 0.2; x <- (-1):7
(ii <- dzipois(x, lambda, pstr0 = pstr0))
max(abs(cumsum(ii) - pzipois(x, lambda, pstr0 = pstr0)))  # 0?
table(rzipois(100, lambda, pstr0 = pstr0))
table(qzipois(runif(100), lambda, pstr0))
round(dzipois(0:10, lambda, pstr0 = pstr0) * 100)  # Similar?

## Not run: 
x <- 0:10
par(mfrow = c(2, 1))  # Zero-inflated Poisson
barplot(rbind(dzipois(x, lambda, pstr0 = pstr0), dpois(x, lambda)),
        beside = TRUE, col = c("blue", "orange"),
        main = paste0("ZIP(", lambda, ", pstr0 = ", pstr0, ") (blue) vs",
                      " Poisson(", lambda, ") (orange)"),
        names.arg = as.character(x))
deflat.limit <- -1 / expm1(lambda)  # Zero-deflated Poisson
newpstr0 <- round(deflat.limit / 1.5, 3)
barplot(rbind(dzipois(x, lambda, pstr0 = newpstr0), dpois(x, lambda)),
        beside = TRUE, col = c("blue", "orange"),
        main = paste0("ZDP(", lambda, ", pstr0 = ", newpstr0, ")",
                      " (blue) vs Poisson(", lambda, ") (orange)"),
        names.arg = as.character(x))
## End(Not run)
Fits a zero-inflated or zero-deflated Poisson distribution by full maximum likelihood estimation.
zipoisson(lpstr0 = "logitlink", llambda = "loglink",
          type.fitted = c("mean", "lambda", "pobs0", "pstr0",
          "onempstr0"), ipstr0 = NULL, ilambda = NULL,
          gpstr0 = NULL, imethod = 1, ishrinkage = 0.95,
          probs.y = 0.35, parallel = FALSE, zero = NULL)
zipoissonff(llambda = "loglink", lonempstr0 = "logitlink",
            type.fitted = c("mean", "lambda", "pobs0", "pstr0",
            "onempstr0"), ilambda = NULL, ionempstr0 = NULL,
            gonempstr0 = NULL, imethod = 1, ishrinkage = 0.95,
            probs.y = 0.35, zero = "onempstr0")
lpstr0, llambda: link functions for the parameter pstr0 and the
mean lambda. See Links for more choices.
ipstr0, ilambda: optional initial values for pstr0 and lambda,
whose values must lie within the parameter space.
lonempstr0, ionempstr0: corresponding arguments for the other
parameterization. See details below.
type.fitted: character. The type of fitted value to be returned.
The first choice (the expected value) is the default.
The estimated probability of an observed 0 is an alternative, else
the estimated probability of a structural 0,
or one minus the estimated probability of a structural 0.
See CommonVGAMffArguments and fittedvlm for more information.
imethod: an integer with value 1 or 2 which specifies the
initialization method for lambda.
See CommonVGAMffArguments for more information.
ishrinkage: how much shrinkage is used when initializing lambda.
See CommonVGAMffArguments for more information.
zero: specifies which linear/additive predictors are to be
modelled as intercept-only. If given, the value can be
either 1 or 2, and the default is none of them.
Setting zero = 1 makes pstr0 intercept-only.
See CommonVGAMffArguments for more information.
gpstr0, gonempstr0, probs.y: details at CommonVGAMffArguments.
parallel: details at CommonVGAMffArguments.
These models are a mixture of a Poisson distribution
and the value 0;
the response has value 0 with probability phi, else it is
Poisson(lambda) distributed.
Thus there are two sources for zero values, and phi
is the probability of a structural zero.
The model for zipoisson() can be written
P(Y = 0) = phi + (1 - phi) * exp(-lambda),
and for y = 1, 2, ...,
P(Y = y) = (1 - phi) * exp(-lambda) * lambda^y / y!.
Here, the parameter phi satisfies 0 < phi < 1.
The mean of Y is (1 - phi) * lambda, and these
are returned as the fitted values, by default.
The variance of Y is (1 - phi) * lambda * (1 + phi * lambda).
By default, the two linear/additive predictors of zipoisson()
are (logitlink(phi), loglink(lambda))^T.
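A quick simulation-based check of these moment formulas
(a sketch only):

set.seed(3)
lambda <- 2; phi <- 0.25
y <- rzipois(1e5, lambda, pstr0 = phi)
c(mean(y), (1 - phi) * lambda)                      # Should be close
c(var(y), (1 - phi) * lambda * (1 + phi * lambda))  # Should be close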
The VGAM family function zipoissonff() has a few
changes compared to zipoisson().
These are:
(i) the order of the linear/additive predictors
is switched so the Poisson mean comes first;
(ii) onempstr0 is now 1 minus the probability
of a structural 0,
i.e., the probability of the parent (Poisson) component,
i.e., onempstr0 is 1 - pstr0;
(iii) argument zero has a new default so that
onempstr0 is intercept-only by default.
Now zipoissonff() is generally recommended
over zipoisson()
(and definitely recommended over yip88).
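The two parameterizations fit the same model; a small sketch with
simulated intercept-only data (the parameter names subsetted below
are assumed to be those reported by Coef()):

set.seed(1)
zdat <- data.frame(y = rzipois(500, lambda = 2, pstr0 = 0.3))
f1 <- vglm(y ~ 1, zipoisson,   data = zdat)
f2 <- vglm(y ~ 1, zipoissonff, data = zdat)
Coef(f1)["pstr0"] + Coef(f2)["onempstr0"]  # Should be 1
Coef(f1)["lambda"] - Coef(f2)["lambda"]    # Should be ~0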
Both functions implement Fisher scoring and can handle
multiple responses.
Both family functions can fit the zero-modified Poisson (ZMP),
which is a combination of the ZIP and zero-deflated Poisson (ZDP);
see Zipois for some details and the example below.
The key is to set the link function to be identitylink.
However, problems might occur when iterations get close to
or go past the boundary of the parameter space,
especially when there are covariates.
The PMF of the ZMP is best written not as above
but in terms of onempstr0, which may be greater
than unity; when using pstr0 the above PMF
is negative for non-zero values.
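A minimal sketch of fitting deflated data this way (simulated data;
compare Example 4 in the Examples below):

set.seed(4)
lam <- 1; dlim <- -1 / expm1(lam)  # Deflation boundary
zd <- data.frame(y = rzipois(1000, lam, pstr0 = dlim / 2))
fitzd <- vglm(y ~ 1, zipoisson(lpstr0 = "identitylink"), data = zd)
Coef(fitzd)  # The pstr0 estimate should be negative, near dlim / 2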
An object of class "vglmff" (see vglmff-class).
The object is used by modelling functions
such as vglm, rrvglm and vgam.
Numerical problems can occur, e.g., when the probability of
zero is actually less than, not more than, the nominal
probability of zero.
For example, in the Angers and Biswas (2003) data below,
replacing 182 by 1 results in nonconvergence.
Half-stepping is not uncommon.
If failure to converge occurs, try using combinations of
imethod, ishrinkage, ipstr0, and/or zipoisson(zero = 1)
if there are explanatory variables.
The default for zipoissonff() is to model the
structural zero probability as intercept-only.
This family function can be used to estimate
the 0-deflated model,
hence pstr0
is not to be interpreted as a probability.
One should set, e.g., lpstr0 = "identitylink"
.
Likewise, the functions in Zipois
can handle the zero-deflated Poisson distribution too.
Although the iterations might fall outside the parameter space,
the validparams slot should keep them inside.
A (somewhat) similar alternative for
zero-deflation is to try the zero-altered Poisson model
(see zapoisson
).
The use of this VGAM family function with rrvglm
can result in a so-called COZIGAM or COZIGLM.
That is, a reduced-rank zero-inflated Poisson model (RR-ZIP)
is a constrained zero-inflated generalized linear model.
See what used to be COZIGAM on CRAN.
An RR-ZINB model can also be fitted easily;
see zinegbinomial.
Jargon-wise, a COZIGLM might be better described as a
COZIVGLM-ZIP.
T. W. Yee
Thas, O. and Rayner, J. C. W. (2005). Smooth tests for the zero-inflated Poisson distribution. Biometrics, 61, 808–815.
Data: Angers, J-F. and Biswas, A. (2003). A Bayesian analysis of zero-inflated generalized Poisson model. Computational Statistics & Data Analysis, 42, 37–46.
Cameron, A. C. and Trivedi, P. K. (1998). Regression Analysis of Count Data. Cambridge University Press: Cambridge.
M'Kendrick, A. G. (1925). Applications of mathematics to medical problems. Proc. Edinb. Math. Soc., 44, 98–130.
Yee, T. W. (2014). Reduced-rank vector generalized linear models with two linear predictors. Computational Statistics and Data Analysis, 71, 889–902.
gaitdpoisson, zapoisson, Zipois, yip88, spikeplot, lpossums,
rrvglm, negbinomial, zipebcom, rpois, simulate.vlm, hdeff.vglm.
# Example 1: simulated ZIP data
zdata <- data.frame(x2 = runif(nn <- 1000))
zdata <- transform(zdata,
  pstr01  = logitlink(-0.5 + 1*x2, inverse = TRUE),
  pstr02  = logitlink( 0.5 - 1*x2, inverse = TRUE),
  Ps01    = logitlink(-0.5       , inverse = TRUE),
  Ps02    = logitlink( 0.5       , inverse = TRUE),
  lambda1 = loglink(-0.5 + 2*x2, inverse = TRUE),
  lambda2 = loglink( 0.5 + 2*x2, inverse = TRUE))
zdata <- transform(zdata, y1 = rzipois(nn, lambda1, pstr0 = Ps01),
                          y2 = rzipois(nn, lambda2, pstr0 = Ps02))
with(zdata, table(y1))  # Eyeball the data
with(zdata, table(y2))
fit1 <- vglm(y1 ~ x2, zipoisson(zero = 1), zdata, crit = "coef")
fit2 <- vglm(y2 ~ x2, zipoisson(zero = 1), zdata, crit = "coef")
coef(fit1, matrix = TRUE)  # Should agree with the above values
coef(fit2, matrix = TRUE)  # Should agree with the above values

# Fit both simultaneously, using a different parameterization:
fit12 <- vglm(cbind(y1, y2) ~ x2, zipoissonff, zdata, crit = "coef")
coef(fit12, matrix = TRUE)  # Should agree with the above values

# For the first observation compute the probability that y1 is
# due to a structural zero.
(fitted(fit1, type = "pstr0") / fitted(fit1, type = "pobs0"))[1]

# Example 2: McKendrick (1925). From 223 Indian village households
cholera <- data.frame(ncases = 0:4,  # Number of cholera cases,
                      wfreq  = c(168, 32, 16, 6, 1))  # Frequencies
fit <- vglm(ncases ~ 1, zipoisson, wei = wfreq, cholera)
coef(fit, matrix = TRUE)
with(cholera, cbind(actual = wfreq,
                    fitted = round(dzipois(ncases, Coef(fit)[2],
                                           pstr0 = Coef(fit)[1]) *
                                   sum(wfreq), digits = 2)))

# Example 3: data from Angers and Biswas (2003)
abdata <- data.frame(y = 0:7, w = c(182, 41, 12, 2, 2, 0, 0, 1))
abdata <- subset(abdata, w > 0)
fit3 <- vglm(y ~ 1, zipoisson(lpstr0 = probitlink, ipstr0 = 0.8),
             data = abdata, weight = w, trace = TRUE)
fitted(fit3, type = "pobs0")  # Estimate of P(Y = 0)
coef(fit3, matrix = TRUE)
Coef(fit3)  # Estimate of pstr0 and lambda
fitted(fit3)
with(abdata, weighted.mean(y, w))  # Compare this with fitted(fit3)
summary(fit3)

# Example 4: zero-deflated (ZDP) model for intercept-only data
zdata <- transform(zdata, lambda3 = loglink(0.0, inverse = TRUE))
zdata <- transform(zdata, deflat.limit = -1 / expm1(lambda3))  # Bndy
# The 'pstr0' parameter is negative and in parameter space:
# Not too near the boundary:
zdata <- transform(zdata, usepstr0 = deflat.limit / 2)
zdata <- transform(zdata, y3 = rzipois(nn, lambda3, pstr0 = usepstr0))
head(zdata)
with(zdata, table(y3))  # A lot of deflation
fit4 <- vglm(y3 ~ 1, data = zdata, trace = TRUE, crit = "coef",
             zipoisson(lpstr0 = "identitylink"))
coef(fit4, matrix = TRUE)
# Check how accurate it was:
zdata[1, "usepstr0"]  # Answer
coef(fit4)[1]         # Estimate
Coef(fit4)
vcov(fit4)  # Is positive-definite

# Example 5: RR-ZIP
set.seed(123)
rrzip <- rrvglm(Alopacce ~ sm.bs(WaterCon, df = 3),
                zipoisson(zero = NULL), data = hspider,
                trace = TRUE, Index.corner = 2)
coef(rrzip, matrix = TRUE)
Coef(rrzip)
summary(rrzip)
## Not run: plotvgam(rrzip, lcol = "blue")
Density, distribution function, and random generation for the zero/one-inflated beta distribution.
dzoabeta(x, shape1, shape2, pobs0 = 0, pobs1 = 0, log = FALSE,
         tol = .Machine$double.eps)
pzoabeta(q, shape1, shape2, pobs0 = 0, pobs1 = 0,
         lower.tail = TRUE, log.p = FALSE, tol = .Machine$double.eps)
qzoabeta(p, shape1, shape2, pobs0 = 0, pobs1 = 0,
         lower.tail = TRUE, log.p = FALSE, tol = .Machine$double.eps)
rzoabeta(n, shape1, shape2, pobs0 = 0, pobs1 = 0,
         tol = .Machine$double.eps)
x, q, p, n: same as Beta.
pobs0, pobs1: vector of probabilities that 0 and 1 are observed
(omega_0 and omega_1, respectively).
shape1, shape2: same as Beta; the two positive shape parameters
of the standard beta distribution.
lower.tail, log, log.p: same as Beta.
tol: numeric, tolerance for testing equality with 0 and 1.
This distribution is a mixture of a discrete distribution
with a continuous distribution.
The cumulative distribution function of Y is
F(y) = (1 - omega_0 - omega_1) * B(y) + omega_0 * I[0 <= y]
       + omega_1 * I[1 <= y],
where B(y) is the cumulative distribution function
of the beta distribution with the same shape parameters (pbeta),
omega_0 is the inflated probability at 0 and
omega_1 is the inflated probability at 1.
The default values of omega_0 and omega_1 mean that these
functions behave like the ordinary Beta
when only the essential arguments are inputted.
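The formula above can be checked numerically; a minimal sketch
(the value printed should be essentially zero):

sh1 <- 2; sh2 <- 3; w0 <- 0.1; w1 <- 0.2
y <- seq(0, 1, by = 0.1)
cdf.byhand <- (1 - w0 - w1) * pbeta(y, sh1, sh2) +
              w0 * (y >= 0) + w1 * (y >= 1)
max(abs(cdf.byhand -
        pzoabeta(y, sh1, sh2, pobs0 = w0, pobs1 = w1)))  # Should be ~0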
dzoabeta gives the density,
pzoabeta gives the distribution function,
qzoabeta gives the quantile function, and
rzoabeta generates random deviates.
Xiangjie Xue and T. W. Yee
zoabetaR, beta, betaR, Betabinom.
## Not run: 
N <- 1000; y <- rzoabeta(N, 2, 3, 0.2, 0.2)
hist(y, probability = TRUE, border = "blue", las = 1,
     main = "Blue = 0- and 1-altered; orange = ordinary beta")
sum(y == 0) / N  # Proportion of 0s
sum(y == 1) / N  # Proportion of 1s
Ngrid <- 1000
lines(seq(0, 1, length = Ngrid),
      dbeta(seq(0, 1, length = Ngrid), 2, 3), col = "orange")
lines(seq(0, 1, length = Ngrid), col = "blue",
      dzoabeta(seq(0, 1, length = Ngrid), 2, 3, 0.2, 0.2))
## End(Not run)
Estimation of the shape parameters of the two-parameter beta distribution plus the probabilities of a 0 and/or a 1.
zoabetaR(lshape1 = "loglink", lshape2 = "loglink",
         lpobs0 = "logitlink", lpobs1 = "logitlink",
         ishape1 = NULL, ishape2 = NULL, trim = 0.05,
         type.fitted = c("mean", "pobs0", "pobs1", "beta.mean"),
         parallel.shape = FALSE, parallel.pobs = FALSE, zero = NULL)
lshape1, lshape2, lpobs0, lpobs1: link functions; details at
Links and CommonVGAMffArguments.
ishape1, ishape2: optional initial values; details at
CommonVGAMffArguments.
trim, zero: same as betaR and CommonVGAMffArguments.
parallel.shape, parallel.pobs: see CommonVGAMffArguments.
type.fitted: the type of fitted value to be returned; the choice
"beta.mean" returns the mean of the beta component only.
See CommonVGAMffArguments for more information.
The standard two-parameter beta distribution has support on (0, 1);
however, many data sets have 0 and/or 1 values too.
This family function handles 0s and 1s (at least one of
them must be present) in
the data set by modelling the probability of a 0 by a
logistic regression (default link is the logit), and similarly
for the probability of a 1. The remaining proportion,
1 - pobs0 - pobs1,
of the data comes from a standard beta distribution.
This family function therefore extends betaR.
One has M = 3 or M = 4 linear/additive predictors per response,
depending on whether one or both of the 0s and 1s are modelled.
Multiple responses are allowed.
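Under this mixture, the overall mean is
pobs1 + (1 - pobs0 - pobs1) * shape1 / (shape1 + shape2),
which corresponds to the default "mean" fitted value.
A quick Monte Carlo check of this (a sketch only):

set.seed(2)
y <- rzoabeta(1e5, shape1 = 2, shape2 = 3, pobs0 = 0.1, pobs1 = 0.2)
c(mean(y), 0.2 + (1 - 0.1 - 0.2) * 2 / (2 + 3))  # Should be close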
Similar to betaR
.
Thomas W. Yee and Xiangjie Xue.
Zoabeta, betaR, betaff, Beta, zipoisson.
nn <- 1000; set.seed(1)
bdata <- data.frame(x2 = runif(nn))
bdata <- transform(bdata,
  pobs0 = logitlink(-2 + x2, inverse = TRUE),
  pobs1 = logitlink(-2 + x2, inverse = TRUE))
bdata <- transform(bdata,
  y1 = rzoabeta(nn, shape1 = exp(1 + x2), shape2 = exp(2 - x2),
                pobs0 = pobs0, pobs1 = pobs1))
summary(bdata)
fit1 <- vglm(y1 ~ x2, zoabetaR(parallel.pobs = TRUE),
             data = bdata, trace = TRUE)
coef(fit1, matrix = TRUE)
summary(fit1)