Title: | Miscellaneous Functions 'T. Yanagida' |
Description: | Miscellaneous functions for (1) data management (e.g., grand-mean and group-mean centering, coding variables and reverse coding items, scale and cluster scores, reading and writing Excel and SPSS files), (2) descriptive statistics (e.g., frequency table, cross tabulation, effect size measures), (3) missing data (e.g., descriptive statistics for missing data, missing data pattern, Little's test of Missing Completely at Random, and auxiliary variable analysis), (4) multilevel data (e.g., multilevel descriptive statistics, within-group and between-group correlation matrix, multilevel confirmatory factor analysis, level-specific fit indices, cross-level measurement equivalence evaluation, multilevel composite reliability, and multilevel R-squared measures), (5) item analysis (e.g., confirmatory factor analysis, coefficient alpha and omega, between-group and longitudinal measurement equivalence evaluation), (6) statistical analysis (e.g., confidence intervals, collinearity and residual diagnostics, dominance analysis, between- and within-subject analysis of variance, latent class analysis, t-test, z-test, sample size determination), and (7) functions to interact with 'Blimp' and 'Mplus'. |
Authors: | Takuya Yanagida [aut, cre] |
Maintainer: | Takuya Yanagida <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.6.8 |
Built: | 2024-12-24 06:33:56 UTC |
Source: | CRAN |
This function performs a one-way between-subject analysis of variance (ANOVA) including the Tukey HSD post hoc test for multiple comparisons, and provides descriptive statistics, effect size measures, and a plot showing error bars for difference-adjusted confidence intervals with jittered data points.
aov.b(formula, data, posthoc = FALSE, conf.level = 0.95, hypo = TRUE,
      descript = TRUE, effsize = FALSE, weighted = FALSE, correct = FALSE,
      plot = FALSE, point.size = 4, adjust = TRUE, error.width = 0.1,
      xlab = NULL, ylab = NULL, ylim = NULL, breaks = ggplot2::waiver(),
      jitter = TRUE, jitter.size = 1.25, jitter.width = 0.05,
      jitter.height = 0, jitter.alpha = 0.1, title = "",
      subtitle = "Confidence Interval", digits = 2, p.digits = 4,
      as.na = NULL, write = NULL, append = TRUE, check = TRUE,
      output = TRUE, ...)
formula |
a formula of the form y ~ group, where y is the outcome variable and group the grouping variable. |
data |
a matrix or data frame containing the variables in the formula. |
posthoc |
logical: if TRUE, Tukey HSD post hoc test for multiple comparisons is conducted. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
hypo |
logical: if TRUE, null and alternative hypotheses are shown on the console. |
descript |
logical: if TRUE, descriptive statistics are shown on the console. |
effsize |
logical: if TRUE, effect size measures are shown on the console. |
weighted |
logical: if TRUE, the weighted pooled standard deviation is used to compute the effect size measure Cohen's d. |
correct |
logical: if TRUE, a small-sample correction factor is applied to Cohen's d. |
plot |
logical: if TRUE, a plot showing error bars for confidence intervals is drawn. |
point.size |
a numeric value indicating the size of the point representing the mean value. |
adjust |
logical: if TRUE, the difference-adjustment for the confidence intervals is applied. |
error.width |
a numeric value indicating the horizontal bar width of the error bar. |
xlab |
a character string specifying the labels for the x-axis. |
ylab |
a character string specifying the labels for the y-axis. |
ylim |
a numeric vector of length two specifying the limits of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
jitter |
logical: if TRUE, jittered data points are drawn. |
jitter.size |
a numeric value indicating the size of the jittered data points. |
jitter.width |
a numeric value indicating the amount of horizontal jitter. |
jitter.height |
a numeric value indicating the amount of vertical jitter. |
jitter.alpha |
a numeric value indicating the opacity of the jittered data points. |
title |
a character string specifying the text for the title for the plot. |
subtitle |
a character string specifying the text for the subtitle for the plot. |
digits |
an integer value indicating the number of decimal places to be used for displaying descriptive statistics and confidence interval. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values, i.e., these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE, output is appended to an existing text file specified in write, if FALSE the existing text file is overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE, output is shown on the console. |
... |
further arguments to be passed to or from methods. |
The Tukey HSD post hoc test reports Cohen's d based on the non-weighted standard deviation (i.e., weighted = FALSE) when an effect size measure is requested (i.e., effsize = TRUE), following the recommendation by Delacre et al. (2021).
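As a rough illustration, a minimal sketch of this effect size for two groups (an illustration of the non-weighted formula, not the misty implementation; the function name is hypothetical):

# Cohen's d based on the non-weighted standard deviation, i.e., the square
# root of the unweighted average of the two group variances
cohens_d_unweighted <- function(x1, x2) {
  sd.star <- sqrt((var(x1) + var(x2)) / 2)
  (mean(x1) - mean(x2)) / sd.star
}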
Cumming and Finch (2005) pointed out that when the 95% confidence intervals (CI) of two separately plotted means overlap, it is still possible that the CI for their difference does not include zero. Baguley (2012a) proposed to adjust the width of the CIs by the factor $\frac{\sqrt{2}}{2}$ to reflect the correct width of the CI for a mean difference:

$$\hat{\mu}_j \pm t_{n_j - 1, 1 - \alpha/2} \cdot \frac{\sqrt{2}}{2} \cdot \hat{\sigma}_{\hat{\mu}_j}$$

These difference-adjusted CIs around the individual means can be interpreted as if they were CIs for their difference. Note that the width of these intervals is sensitive to differences in the variance and sample size of each sample, i.e., unequal population variances and unequal sample sizes alter the interpretation of difference-adjusted CIs.
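A minimal sketch of this adjustment for a single group mean, following the formula above (the function name is hypothetical; not the misty implementation):

# Difference-adjusted CI around a group mean: the usual t-based CI
# multiplied by the factor sqrt(2)/2 (Baguley, 2012a)
diff_adjusted_ci <- function(y, conf.level = 0.95) {
  y <- na.omit(y)
  n <- length(y)
  se <- sd(y) / sqrt(n)
  crit <- qt(1 - (1 - conf.level) / 2, df = n - 1)
  mean(y) + c(-1, 1) * sqrt(2) / 2 * crit * se
}

# Example: difference-adjusted 95% CI for group 1
# diff_adjusted_ci(dat$y[dat$group == 1])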
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame with variables used in the current analysis |
formula |
formula of the current analysis |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Baguley, T. S. (2012a). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.
Cumming, G., and Finch, S. (2005). Inference by eye: Confidence intervals, and how to read pictures of data. American Psychologist, 60, 170–180.
Delacre, M., Lakens, D., Ley, C., Liu, L., & Leys, C. (2021). Why Hedges' g*s based on the non-pooled standard deviation should be reported with Welch's t-test. https://doi.org/10.31234/osf.io/tu6mp
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
aov.w, test.t, test.z, test.levene, test.welch, cohens.d, ci.mean.diff, ci.mean
dat <- data.frame(group = c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3),
                  y = c(3, 1, 4, 2, 5, 3, 2, 3, 6, 6, 3, NA))

# Example 1: Between-subject ANOVA
aov.b(y ~ group, data = dat)

# Example 2: Between-subject ANOVA
# print effect size measures
aov.b(y ~ group, data = dat, effsize = TRUE)

# Example 3: Between-subject ANOVA
# do not print hypotheses and descriptive statistics
aov.b(y ~ group, data = dat, descript = FALSE, hypo = FALSE)

## Not run:
# Example 4: Write results into a text file
aov.b(y ~ group, data = dat, write = "ANOVA.txt")

# Example 5: Between-subject ANOVA
# plot results
aov.b(y ~ group, data = dat, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Example 6: Save plot, ggsave() from the ggplot2 package
ggsave("Between-Subject_ANOVA.png", dpi = 600, width = 4.5, height = 6)

# Example 7: Between-subject ANOVA
# extract plot
p <- aov.b(y ~ group, data = dat, output = FALSE)$plot
p

# Extract data
plotdat <- aov.b(y ~ group, data = dat, output = FALSE)$data

# Draw plot in line with the default setting of aov.b()
ggplot(plotdat, aes(group, y)) +
  geom_jitter(alpha = 0.1, width = 0.05, height = 0, size = 1.25) +
  geom_point(stat = "summary", fun = "mean", size = 4) +
  stat_summary(fun.data = "mean_cl_normal", geom = "errorbar", width = 0.20) +
  scale_x_discrete(name = NULL) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5))
## End(Not run)
This function performs a one-way repeated-measures analysis of variance (within-subject ANOVA) including paired-samples t-tests for multiple comparisons, and provides descriptive statistics, effect size measures, and a plot showing error bars for difference-adjusted Cousineau-Morey within-subject confidence intervals with jittered data points and subject-specific lines.
aov.w(formula, data, print = c("all", "none", "LB", "GG", "HF"),
      posthoc = FALSE, conf.level = 0.95,
      p.adj = c("none", "bonferroni", "holm", "hochberg", "hommel",
                "BH", "BY", "fdr"),
      hypo = TRUE, descript = TRUE, epsilon = TRUE, effsize = FALSE,
      na.omit = TRUE, plot = FALSE, point.size = 4, adjust = TRUE,
      error.width = 0.1, xlab = NULL, ylab = NULL, ylim = NULL,
      breaks = ggplot2::waiver(), jitter = TRUE, line = TRUE,
      jitter.size = 1.25, jitter.width = 0.05, jitter.height = 0,
      jitter.alpha = 0.1, title = "", subtitle = "Confidence Interval",
      digits = 2, p.digits = 4, as.na = NULL, write = NULL, append = TRUE,
      check = TRUE, output = TRUE, ...)
formula |
a formula of the form cbind(time1, time2, time3) ~ 1, where time1, time2, and time3 are the repeated measurements in wide format. |
data |
a matrix or data frame containing the variables in the formula. |
print |
a character vector indicating which sphericity correction to use, i.e., "all" for all corrections, "none", "LB" for the lower-bound correction, "GG" for the Greenhouse-Geisser correction, and "HF" for the Huynh-Feldt correction. |
posthoc |
logical: if TRUE, paired-samples t-tests for multiple comparisons are conducted. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
p.adj |
a character string indicating the adjustment method for multiple testing, i.e., "none", "bonferroni", "holm", "hochberg", "hommel", "BH", "BY", or "fdr" (see p.adjust). |
hypo |
logical: if TRUE, null and alternative hypotheses are shown on the console. |
descript |
logical: if TRUE, descriptive statistics are shown on the console. |
epsilon |
logical: if TRUE, sphericity estimates (epsilon) are shown on the console. |
effsize |
logical: if TRUE, effect size measures are shown on the console. |
na.omit |
logical: if TRUE, incomplete cases are removed before conducting the analysis (i.e., listwise deletion); if FALSE, the ANOVA is conducted in long data format using all available observations. |
plot |
logical: if TRUE, a plot showing error bars for confidence intervals is drawn. |
point.size |
a numeric value indicating the size of the point representing the mean value. |
adjust |
logical: if TRUE, the difference-adjustment for the Cousineau-Morey within-subject confidence intervals is applied. |
error.width |
a numeric value indicating the horizontal bar width of the error bar. |
xlab |
a character string specifying the labels for the x-axis. |
ylab |
a character string specifying the labels for the y-axis. |
ylim |
a numeric vector of length two specifying the limits of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
jitter |
logical: if TRUE, jittered data points are drawn. |
line |
logical: if TRUE, subject-specific lines are drawn. |
jitter.size |
a numeric value indicating the size of the jittered data points. |
jitter.width |
a numeric value indicating the amount of horizontal jitter. |
jitter.height |
a numeric value indicating the amount of vertical jitter. |
jitter.alpha |
a numeric value indicating the opacity of the jittered data points. |
title |
a character string specifying the text for the title for the plot. |
subtitle |
a character string specifying the text for the subtitle for the plot. |
digits |
an integer value indicating the number of decimal places to be used for displaying descriptive statistics and confidence interval. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values, i.e., these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE, output is appended to an existing text file specified in write, if FALSE the existing text file is overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE, output is shown on the console. |
... |
further arguments to be passed to or from methods. |
The F-test of the repeated-measures ANOVA is based on the assumption of sphericity, i.e., the assumption that the variances of the differences between all pairs of repeated measures are equal in the population. Mauchly's test is commonly used to test this hypothesis. However, testing the assumption addresses an irrelevant hypothesis, because what matters is the degree of violation rather than its mere presence (Baguley, 2012a). Moreover, the test is not recommended because it lacks statistical power (Abdi, 2010). Instead, the Box index of sphericity ($\varepsilon$) should be used to assess the degree of violation of the sphericity assumption. The $\varepsilon$ parameter indicates the degree to which the population departs from sphericity, with $\varepsilon = 1$ indicating that sphericity holds. As the departure becomes more extreme, $\varepsilon$ approaches its lower bound

$$\varepsilon_{min} = \frac{1}{k - 1}$$

where $k$ is the number of levels of the within-subject factor. Box (1954a, 1954b) suggested a measure for sphericity that applies to the population covariance matrix. Greenhouse and Geisser (1959) proposed an estimate for $\varepsilon$ known as $\hat{\varepsilon}_{GG}$ that can be computed from the sample covariance matrix, whereas Huynh and Feldt (1976) proposed an alternative estimate $\hat{\varepsilon}_{HF}$. These estimates can be used to correct the effect and error degrees of freedom of the F-test. Simulation studies showed that $\hat{\varepsilon}_{GG} \leq \hat{\varepsilon}_{HF}$, and that $\hat{\varepsilon}_{GG}$ tends to be conservative, underestimating $\varepsilon$, whereas $\hat{\varepsilon}_{HF}$ tends to be liberal, overestimating $\varepsilon$ and occasionally exceeding one. Baguley (2012a) recommended computing the average of the conservative estimate $\hat{\varepsilon}_{GG}$ and the liberal estimate $\hat{\varepsilon}_{HF}$ to assess the sphericity assumption.
By default, the function prints results depending on the average of $\hat{\varepsilon}_{GG}$ and $\hat{\varepsilon}_{HF}$ (a sketch of computing these estimates is given below):
If the average is less than 0.75, results of the F-test based on the Greenhouse-Geisser correction factor ($\hat{\varepsilon}_{GG}$) are printed.
If the average is greater than or equal to 0.75 but less than 0.95, results of the F-test based on the Huynh-Feldt correction factor ($\hat{\varepsilon}_{HF}$) are printed.
If the average is greater than or equal to 0.95, results of the F-test without any correction are printed.
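The following minimal sketch illustrates how these estimates can be computed from the sample covariance matrix (an illustration of the formulas above, not the misty implementation; the function name is hypothetical):

# Greenhouse-Geisser and Huynh-Feldt estimates of Box's epsilon for a
# one-way within-subject design with data in wide format (n x k)
epsilon_gg_hf <- function(data) {
  data <- as.matrix(data)
  n <- nrow(data)
  k <- ncol(data)
  S <- cov(data)
  # Double-center the sample covariance matrix
  D <- sweep(sweep(S, 1, rowMeans(S)), 2, colMeans(S)) + mean(S)
  # Greenhouse-Geisser estimate: conservative, 1/(k - 1) <= eps.gg <= 1
  eps.gg <- sum(diag(D))^2 / ((k - 1) * sum(D^2))
  # Huynh-Feldt estimate: liberal, may exceed one
  eps.hf <- (n * (k - 1) * eps.gg - 2) / ((k - 1) * (n - 1 - (k - 1) * eps.gg))
  c(GG = eps.gg, HF = eps.hf)
}

# Average of the conservative and the liberal estimate (Baguley, 2012a)
# mean(epsilon_gg_hf(dat[, c("time1", "time2", "time3")]))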
The function uses listwise deletion by default to deal with missing data. However, the function also allows using all available observations by conducting the repeated-measures ANOVA in long data format when specifying na.omit = FALSE. Note that in the presence of missing data, the F-test without any sphericity correction may be reliable, but it is not clear whether results based on the Greenhouse-Geisser or Huynh-Feldt correction are trustworthy, given that pairwise deletion is used for estimating the variance-covariance matrix when computing $\hat{\varepsilon}_{GG}$, while the total number of subjects regardless of missing values (i.e., complete and incomplete cases) is used for computing $\hat{\varepsilon}_{HF}$.
The function provides a plot showing error bars for difference-adjusted Cousineau-Morey within-subject confidence intervals (Baguley, 2012b). These intervals match the width of a CI for a difference, i.e., non-overlapping CIs correspond to an inference of a statistically significant difference. Cousineau-Morey confidence intervals without the difference adjustment can be requested by specifying adjust = FALSE.
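A minimal sketch of these intervals, assuming complete data in wide format (an illustration following Baguley (2012b), not the misty implementation; the function name is hypothetical):

# Difference-adjusted Cousineau-Morey within-subject CIs
cousineau_morey_ci <- function(data, conf.level = 0.95, adjust = TRUE) {
  data <- as.matrix(data)
  n <- nrow(data)
  k <- ncol(data)
  # Normalize: remove subject means, add back the grand mean
  y <- data - rowMeans(data) + mean(data)
  m <- colMeans(y)
  # Morey correction factor k/(k - 1) for the normalized variances
  se <- sqrt(k / (k - 1)) * apply(y, 2, sd) / sqrt(n)
  crit <- qt(1 - (1 - conf.level) / 2, df = n - 1)
  # Difference adjustment by the factor sqrt(2)/2
  if (adjust) crit <- sqrt(2) / 2 * crit
  cbind(m = m, low = m - crit * se, upp = m + crit * se)
}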
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the data ( |
formula |
formula of the current analysis |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Abdi, H. (2010). The Greenhouse-Geisser correction. In N. J. Salkind (Ed.) Encyclopedia of Research Design (pp. 630-634), Sage. https://dx.doi.org/10.4135/9781412961288
Baguley, T. S. (2012a). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.
Baguley, T. (2012b). Calculating and graphing within-subject confidence intervals for ANOVA. Behavior Research Methods, 44, 158-175. https://doi.org/10.3758/s13428-011-0123-7
Bakeman, R. (2005). Recommended effect size statistics for repeated measures designs. Behavior Research Methods, 37, 379-384. https://doi.org/10.3758/BF03192707
Box, G. E. P. (1954a). Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, I. Effects of Inequality of Variance in the One-way Classification. Annals of Mathematical Statistics, 25, 290–302.
Box, G. E. P. (1954b). Some Theorems on Quadratic Forms Applied in the Study of Analysis of Variance Problems, II. Effects of Inequality of Variance and of Correlation between Errors in the Two-way Classification. Annals of Mathematical Statistics, 25, 484–498.
Greenhouse, S. W., and Geisser, S. (1959). On methods in the analysis of profile data. Psychometrika, 24, 95-112. https://doi.org/10.1007/BF02289823
Huynh, H., and Feldt, L. S. (1976). Estimation of the box correction for degrees of freedom from sample data in randomized block and split-plot designs. Journal of Educational Statistics, 1, 69-82. https://doi.org/10.2307/1164736
Olejnik, S., & Algina, J. (2000). Measures of effect size for comparative studies: Applications, interpretations, and limitations. Contemporary Educational Psychology, 25, 241-286. https://doi.org/10.1006/ceps.2000.1040
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
aov.b, test.t, test.z, cohens.d, ci.mean.diff, ci.mean
dat <- data.frame(time1 = c(3, 2, 1, 4, 5, 2, 3, 5, 6, 7),
                  time2 = c(4, 3, 6, 5, 8, 6, 7, 3, 4, 5),
                  time3 = c(1, 2, 2, 3, 6, 5, 1, 2, 4, 6))

# Example 1: Repeated measures ANOVA
aov.w(cbind(time1, time2, time3) ~ 1, data = dat)

# Example 2: Repeated measures ANOVA
# print results based on all sphericity corrections
aov.w(cbind(time1, time2, time3) ~ 1, data = dat, print = "all")

# Example 3: Repeated measures ANOVA
# print effect size measures
aov.w(cbind(time1, time2, time3) ~ 1, data = dat, effsize = TRUE)

# Example 4: Repeated measures ANOVA
# do not print hypotheses and descriptive statistics
aov.w(cbind(time1, time2, time3) ~ 1, data = dat, descript = FALSE, hypo = FALSE)

## Not run:
# Example 5: Write results into a text file
aov.w(cbind(time1, time2, time3) ~ 1, data = dat, write = "RM-ANOVA.txt")

# Example 6: Repeated measures ANOVA
# plot results
aov.w(cbind(time1, time2, time3) ~ 1, data = dat, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Repeated_measures_ANOVA.png", dpi = 600, width = 4.5, height = 4)

# Example 7: Repeated measures ANOVA
# extract plot
p <- aov.w(cbind(time1, time2, time3) ~ 1, data = dat, output = FALSE)$plot
p

# Extract data
plotdat <- aov.w(cbind(time1, time2, time3) ~ 1, data = dat, output = FALSE)$data

# Draw plot in line with the default setting of aov.w()
ggplot(plotdat$long, aes(time, y, group = 1L)) +
  geom_point(aes(time, y, group = id), alpha = 0.1, position = position_dodge(0.05)) +
  geom_line(aes(time, y, group = id), alpha = 0.1, position = position_dodge(0.05)) +
  geom_point(data = plotdat$ci, aes(variable, m), stat = "identity", size = 4) +
  stat_summary(aes(time, y), fun = mean, geom = "line") +
  geom_errorbar(data = plotdat$ci, aes(variable, m, ymin = low, ymax = upp), width = 0.1) +
  theme_bw() +
  xlab(NULL) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme(plot.subtitle = element_text(hjust = 0.5),
        plot.title = element_text(hjust = 0.5))
## End(Not run)
The function as.na replaces user-specified values in the argument na in a vector, factor, matrix, array, list, or data frame with NA, while the function na.as replaces NA in a vector, factor, matrix, or data frame with user-specified values in the argument na.
as.na(..., data = NULL, na, replace = TRUE, check = TRUE)

na.as(..., data = NULL, na, replace = TRUE, as.na = NULL, check = TRUE)
... |
a vector, factor, matrix, array, data frame, or list. Alternatively, an expression indicating the variable names in data, e.g., as.na(x1, x2, data = dat). |
data |
a data frame when specifying one or more variables in the argument .... |
na |
a vector indicating values or characters to replace with NA (as.na), or values with which NA is replaced (na.as). |
replace |
logical: if TRUE (default), variables specified in ... are replaced in the data frame specified in data. |
check |
logical: if TRUE, argument specification is checked. |
as.na |
a numeric vector or character vector indicating user-defined missing values, i.e., these values are converted to NA before conducting the analysis. |
Returns a vector, factor, matrix, array, data frame, or list specified in the argument ..., or a data frame specified in data with the variables specified in ... replaced.
Takuya Yanagida [email protected]
na.auxiliary, na.coverage, na.descript, na.indicator, na.pattern, na.prop, na.test
#-------------------------------------------------------------------------------
# Numeric vector
num <- c(1, 3, 2, 4, 5)

# Example 1: Replace 2 with NA
as.na(num, na = 2)

# Example 2: Replace 2, 3, and 4 with NA
as.na(num, na = c(2, 3, 4))

# Example 3: Replace NA with 2
na.as(c(1, 3, NA, 4, 5), na = 2)

#-------------------------------------------------------------------------------
# Character vector
chr <- c("a", "b", "c", "d", "e")

# Example 4: Replace "b" with NA
as.na(chr, na = "b")

# Example 5: Replace "b", "c", and "d" with NA
as.na(chr, na = c("b", "c", "d"))

# Example 6: Replace NA with "b"
na.as(c("a", NA, "c", "d", "e"), na = "b")

#-------------------------------------------------------------------------------
# Factor
fac <- factor(c("a", "a", "b", "b", "c", "c"))

# Example 7: Replace "b" with NA
as.na(fac, na = "b")

# Example 8: Replace "b" and "c" with NA
as.na(fac, na = c("b", "c"))

# Example 9: Replace NA with "b"
na.as(factor(c("a", "a", NA, NA, "c", "c")), na = "b")

#-------------------------------------------------------------------------------
# Matrix
mat <- matrix(1:20, ncol = 4)

# Example 10: Replace 8 with NA
as.na(mat, na = 8)

# Example 11: Replace 8, 14, and 20 with NA
as.na(mat, na = c(8, 14, 20))

# Example 12: Replace NA with 2
na.as(matrix(c(1, NA, 3, 4, 5, 6), ncol = 2), na = 2)

#-------------------------------------------------------------------------------
# Array

# Example 13: Replace 1 and 10 with NA
as.na(array(1:20, dim = c(2, 3, 2)), na = c(1, 10))

#-------------------------------------------------------------------------------
# List

# Example 14: Replace 1 with NA
as.na(list(x1 = c(1, 2, 3, 1, 2, 3),
           x2 = c(2, 1, 3, 2, 1),
           x3 = c(3, 1, 2, 3)), na = 1)

#-------------------------------------------------------------------------------
# Data frame
df <- data.frame(x1 = c(1, 2, 3),
                 x2 = c(2, 1, 3),
                 x3 = c(3, 1, 2))

# Example 15a: Replace 1 with NA
as.na(df, na = 1)

# Example 15b: Alternative specification using the 'data' argument
as.na(., data = df, na = 1)

# Example 16: Replace 1 and 3 with NA
as.na(df, na = c(1, 3))

# Example 17a: Replace 1 with NA in 'x2'
as.na(df$x2, na = 1)

# Example 17b: Alternative specification using the 'data' argument
as.na(x2, data = df, na = 1)

# Example 18: Replace 1 with NA in 'x2' and 'x3'
as.na(x2, x3, data = df, na = 1)

# Example 19: Replace 1 with NA in 'x1', 'x2', and 'x3'
as.na(x1:x3, data = df, na = 1)

# Example 20: Replace NA with -99
na.as(data.frame(x1 = c(NA, 2, 3),
                 x2 = c(2, NA, 3),
                 x3 = c(3, NA, 2)), na = -99)

# Example 21: Recode by replacing 30 with NA and then replacing NA with 3
na.as(data.frame(x1 = c(1, 2, 30),
                 x2 = c(2, 1, 30),
                 x3 = c(30, 1, 2)), na = 3, as.na = 30)
This wrapper function creates a Blimp input file, runs the input file using the blimp.run() function, and prints the Blimp output file using the blimp.print() function.
blimp(x, file = "Blimp_Input.imp", data = NULL, comment = FALSE,
      replace.inp = TRUE, blimp.run = TRUE, posterior = FALSE,
      folder = "Posterior_",
      format = c("csv", "csv2", "excel", "rds", "workspace"),
      clear = TRUE, replace.out = c("always", "never", "modified"),
      Blimp = detect.blimp(),
      result = c("all", "default", "algo.options", "data.info",
                 "model.info", "warn.mess", "out.model", "gen.param"),
      exclude = NULL, color = c("none", "blue", "violet"),
      style = c("bold", "regular"), not.result = TRUE, write = NULL,
      append = TRUE, check = TRUE, output = TRUE)
x |
a character string containing the Blimp input text. |
file |
a character string indicating the name of the Blimp input file with or without the file extension .imp, e.g., "Blimp_Input.imp". |
data |
a matrix or data frame from which the variable names for the section VARIABLES are extracted. |
comment |
logical: if |
replace.inp |
logical: if |
blimp.run |
logical: if TRUE (default), the input file specified in the argument file is run using the blimp.run() function. |
posterior |
logical: if TRUE, the posterior distribution including burn-in and post-burn-in phase for all parameters is saved in long format in a file called posterior.* in the folder specified in the argument folder. |
folder |
a character string indicating the prefix of the folder for saving the posterior distributions. The default setting is folder = "Posterior_". |
format |
a character vector indicating the file format(s) for saving the posterior distributions, i.e., "csv", "csv2", "excel", "rds", and "workspace". |
clear |
logical: if |
replace.out |
a character string for specifying three settings:
|
Blimp |
a character string for specifying the name or path of the Blimp executable to be used for running models. This covers situations where Blimp is not in the system's path, or where one wants to test different versions of the Blimp program. Note that there is no need to specify this argument for most users since it has intelligent defaults. |
result |
a character vector specifying Blimp result sections included in the output (see 'Details' in the blimp.print function). |
exclude |
a character vector specifying Blimp input command or result sections excluded from the output (see 'Details' in the blimp.print function). |
color |
a character vector with two elements indicating the colors
used for the main headers (e.g., |
style |
a character vector with two elements indicating the style
used for headers (e.g., |
not.result |
logical: if |
write |
a character string naming a file for writing the output into a text file with file extension ".txt" (e.g., "Output.txt"). |
append |
logical: if TRUE, output is appended to an existing text file specified in write, if FALSE the existing text file is overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE, output is shown on the console. |
VARIABLES Section

The VARIABLES section used to assign names to the variables in the data set can be specified by using the data argument:

Write Blimp Data File: In the first step, the Blimp data file is written by using the write.mplus() function, e.g., write.mplus(data1, file = "data1.dat").

Specify Blimp Input: In the second step, the Blimp input is specified as a character string. The VARIABLES option is left out of the Blimp input text, e.g., input <- 'DATA: data1.dat;\nMODEL: y ~ x1@b1 x2@b2 d2;'.

Run Blimp Input: In the third step, the Blimp input is run by using the blimp() function. The argument data needs to be specified given that the VARIABLES section was left out of the Blimp input text in the previous step, e.g., blimp(input, file = "Ex4.3.imp", data = data1).
Note that unlike Mplus, Blimp allows specifying a CSV data file with variable names in the first row. Hence, it is recommended to export the data from R using the write.csv() function and to specify the data file in the DATA section of the Blimp input file without specifying the VARIABLES section.
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
a character vector containing the Blimp input text |
args |
specification of function arguments |
write |
write command sections |
result |
list with result sections ( |
Takuya Yanagida
Keller, B. T., & Enders, C. K. (2023). Blimp user’s guide (Version 3). Retrieved from www.appliedmissingdata.com/blimp
blimp.update, blimp.run, blimp.print, blimp.plot, blimp.bayes
## Not run:
#----------------------------------------------------------------------------
# Example 1: Write data, specify input without VARIABLES section, and run input

# Write Data File
# Note that row.names = FALSE needs to be specified
write.csv(data1, file = "data1.csv", row.names = FALSE)

# Specify Blimp input
input1 <- '
DATA: data1.csv;
ORDINAL: d;
MISSING: 999;
FIXED: d;
CENTER: x1 x2;
MODEL: y ~ x1 x2 d;
SEED: 90291;
BURN: 1000;
ITERATIONS: 10000;
'

# Run Blimp input
blimp(input1, file = "Ex4.3.imp")

#----------------------------------------------------------------------------
# Example 2: Write data, specify input with VARIABLES section, and run input

# Write Data File
write.mplus(data1, file = "data1.dat", input = FALSE)

# Specify Blimp input
input2 <- '
DATA: data1.dat;
VARIABLES: id v1 v2 v3 y x1 d x2 v4;
ORDINAL: d;
MISSING: 999;
FIXED: d;
CENTER: x1 x2;
MODEL: y ~ x1 x2 d;
SEED: 90291;
BURN: 1000;
ITERATIONS: 10000;
'

# Run Blimp input
blimp(input2, file = "Ex4.3.imp")

#----------------------------------------------------------------------------
# Example 3: Alternative specification using the data argument

# Write Data File
write.mplus(data1, file = "data1.dat", input = FALSE)

# Specify Blimp input
input3 <- '
DATA: data1.dat;
ORDINAL: d;
MISSING: 999;
FIXED: d;
CENTER: x1 x2;
MODEL: y ~ x1 x2 d;
SEED: 90291;
BURN: 1000;
ITERATIONS: 10000;
'

# Run Blimp input
blimp(input3, file = "Ex4.3.imp", data = data1)
## End(Not run)
This function reads the posterior distribution for all parameters saved in long format in a file called posterior.* by the function blimp.run or blimp when specifying posterior = TRUE, and computes point estimates (i.e., mean, median, and MAP), measures of dispersion (i.e., standard deviation and mean absolute deviation), measures of shape (i.e., skewness and kurtosis), credible intervals (i.e., equal-tailed intervals and highest density intervals), convergence and efficiency diagnostics (i.e., potential scale reduction factor R-hat, effective sample size, and Monte Carlo standard error), the probability of direction, and the probability of being in the region of practical equivalence for the posterior distribution of each parameter. By default, the function computes the maximum of the rank-normalized split-R-hat and the rank-normalized folded-split-R-hat, the bulk effective sample size (Bulk-ESS) for rank-normalized values using split chains, the tail effective sample size (Tail-ESS) defined as the minimum of the effective sample sizes for the 0.025 and 0.975 quantiles, the bulk Monte Carlo standard error (Bulk-MCSE) for the median, and the tail Monte Carlo standard error (Tail-MCSE) defined as the maximum of the MCSEs for the 0.025 and 0.975 quantiles.
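To make these measures concrete, the following minimal sketch computes several of them for a single vector of posterior draws (an illustration, not the misty implementation; the function name is hypothetical):

post_summary <- function(draws, conf.level = 0.95, null = 0, rope = NULL) {
  # Maximum a posteriori (MAP): mode of a kernel density estimate
  d <- density(draws)
  map <- d$x[which.max(d$y)]
  # Equal-tailed interval (ETI): symmetric tail probabilities
  eti <- quantile(draws, probs = c((1 - conf.level) / 2, 1 - (1 - conf.level) / 2))
  # Highest density interval (HDI): shortest interval containing conf.level
  x <- sort(draws)
  n <- length(x)
  m <- ceiling(conf.level * n)
  lo <- which.min(x[m:n] - x[1:(n - m + 1)])
  hdi <- c(x[lo], x[lo + m - 1])
  # Probability of direction: proportion of draws on the dominant side of null
  pd <- max(mean(draws > null), mean(draws < null))
  # Probability of being in the region of practical equivalence (ROPE)
  p.rope <- if (is.null(rope)) NA else mean(draws > rope[1] & draws < rope[2])
  list(m = mean(draws), med = median(draws), map = map,
       eti = eti, hdi = hdi, pd = pd, rope = p.rope)
}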
blimp.bayes(x, param = NULL,
            print = c("all", "default", "m", "med", "map", "sd", "mad",
                      "skew", "kurt", "eti", "hdi", "rhat", "b.ess",
                      "t.ess", "b.mcse", "t.mcse"),
            m.bulk = FALSE, split = TRUE, rank = TRUE, fold = TRUE,
            pd = FALSE, null = 0, rope = NULL,
            ess.tail = c(0.025, 0.975), mcse.tail = c(0.025, 0.975),
            alternative = c("two.sided", "less", "greater"),
            conf.level = 0.95, digits = 2, r.digits = 3, ess.digits = 0,
            mcse.digits = 3, p.digits = 3, write = NULL, append = TRUE,
            check = TRUE, output = TRUE)
x |
a character string indicating the name of the folder containing the posterior.* file (e.g., "Posterior_Ex4.3") or the name of the posterior.* file (e.g., "Posterior_Ex4.3/posterior.csv"). |
param |
a numeric vector indicating which parameters to print.
Note that the number of the parameter ( |
print |
a character vector indicating which summary measures,
convergence, and efficiency diagnostics to be printed on
the console, i.e. |
m.bulk |
logical: if |
split |
logical: if TRUE (default), each chain is split in half before computing R-hat. |
rank |
logical: if TRUE (default), rank-normalization is applied to the posterior draws before computing R-hat and the effective sample size. |
fold |
logical: if TRUE (default), the rank-normalized folded-split-R-hat is computed in addition to the rank-normalized split-R-hat. |
pd |
logical: if TRUE, the probability of direction is printed on the console. |
null |
a numeric value considered as a null effect for the probability of direction (default is 0). |
rope |
a numeric vector with two elements indicating the ROPE's
lower and upper bounds. ROPE is also depending on the argument
|
ess.tail |
a numeric vector with two elements to specify the quantiles for computing the tail ESS. The default setting is ess.tail = c(0.025, 0.975), i.e., the minimum of the effective sample sizes for the 2.5% and 97.5% quantiles. |
mcse.tail |
a numeric vector with two elements to specify the quantiles for computing the tail MCSE. The default setting is mcse.tail = c(0.025, 0.975), i.e., the maximum of the MCSEs for the 2.5% and 97.5% quantiles. |
alternative |
a character string specifying the alternative hypothesis for the credible intervals, must be one of "two.sided" (default), "less", or "greater". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the credible interval. The default setting is conf.level = 0.95. |
digits |
an integer value indicating the number of decimal places to be used for displaying point estimates, measures of dispersion, and credible intervals. |
r.digits |
an integer value indicating the number of decimal places to be used for displaying R-hat values. |
ess.digits |
an integer value indicating the number of decimal places to be used for displaying effective sample sizes. |
mcse.digits |
an integer value indicating the number of decimal places to be used for displaying Monte Carlo standard errors. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the probability of direction and the probability of being in the region of practical equivalence (ROPE). |
write |
a character string naming a file for writing the output into either a text file with file extension ".txt" (e.g., "Output.txt") or an Excel file with file extension ".xlsx" (e.g., "Output.xlsx"). |
append |
logical: if TRUE, output is appended to an existing text file specified in write, if FALSE the existing text file is overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE, output is shown on the console. |
Convergence and efficiency diagnostics for Markov chains are based on the following numeric measures:
Potential Scale Reduction (PSR) factor R-hat: The PSR factor R-hat compares the between- and within-chain variance for a model parameter, i.e., an R-hat larger than 1 indicates that the between-chain variance is greater than the within-chain variance and the chains have not mixed well. According to the default setting, the function computes the improved R-hat as recommended by Vehtari et al. (2020), based on rank-normalizing (i.e., rank = TRUE) and folding (i.e., fold = TRUE) the posterior draws after splitting each MCMC chain in half (i.e., split = TRUE). The traditional R-hat used in Blimp can be requested by specifying split = TRUE, rank = FALSE, and fold = FALSE. Note that the traditional R-hat can catch many problems of poor convergence, but it fails if the chains have different variances with the same mean parameter, or if the chains have infinite variance with one of the chains having a different location parameter than the others (Vehtari et al., 2020). According to Gelman et al. (2014), an R-hat value of 1.1 or smaller for all parameters can be considered evidence for convergence. The Stan Development Team (2024) recommends running at least four chains and using a convergence criterion of less than 1.05 for the maximum of the rank-normalized split-R-hat and the rank-normalized folded-split-R-hat. Vehtari et al. (2020), however, recommended using the posterior samples only if R-hat is less than 1.01, because R-hat can fall below 1.1 well before convergence in some scenarios (Brooks & Gelman, 1998; Vats & Knudson, 2018).
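A minimal sketch of the traditional split-R-hat (without rank-normalization and folding), assuming a matrix of posterior draws with one column per chain (an illustration, not the misty implementation; the function name is hypothetical):

split_rhat <- function(draws) {
  # Split each chain in half to detect trends within chains
  n <- floor(nrow(draws) / 2)
  chains <- cbind(draws[1:n, , drop = FALSE], draws[(n + 1):(2 * n), , drop = FALSE])
  B <- n * var(colMeans(chains))       # between-chain variance
  W <- mean(apply(chains, 2, var))     # within-chain variance
  sqrt(((n - 1) / n * W + B / n) / W)  # potential scale reduction factor
}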
Effective Sample Size (ESS): The ESS is the estimated number of independent samples from the posterior distribution that would lead to the same precision as the autocorrelated samples at hand. According to the default setting, the function computes the ESS based on rank-normalized draws using split chains and the within-chain autocorrelation. The function provides the estimated Bulk-ESS (B.ESS) and Tail-ESS (T.ESS). The Bulk-ESS is a useful measure of sampling efficiency in the bulk of the distribution (i.e., efficiency of the posterior mean), and the Tail-ESS is a useful measure of sampling efficiency in the tails of the distribution (e.g., efficiency of tail quantile estimates). Note that by default, the Tail-ESS is the minimum of the effective sample sizes for the 2.5% and 97.5% quantiles (tail = c(0.025, 0.975)). According to Kruschke (2015), a rank-normalized ESS greater than 400 is usually sufficient to get a stable estimate of the Monte Carlo standard error. However, an ESS of at least 1000 is considered optimal (Zitzmann & Hecht, 2019).
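A simplified sketch of the ESS for a single chain, truncating the autocorrelation sum at the first non-positive lag (an illustration only; the rank-normalized Bulk-ESS and Tail-ESS computed by the function are more involved, and the function name is hypothetical):

ess_basic <- function(draws) {
  n <- length(draws)
  # Autocorrelations at lags 1, 2, ... (lag 0 dropped)
  rho <- acf(draws, lag.max = n - 1, plot = FALSE)$acf[-1]
  cut <- which(rho <= 0)[1]
  if (!is.na(cut)) rho <- rho[seq_len(cut - 1)]
  n / (1 + 2 * sum(rho))
}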
Monte Carlo Standard Error (MCSE): The MCSE is defined as the standard deviation of the chains divided by their effective sample size and reflects uncertainty due to the stochastic algorithm of the Markov chain Monte Carlo method. The function provides the estimated Bulk-MCSE (B.MCSE) for the margin of error when using the MCMC samples to estimate the posterior mean, and the Tail-MCSE (T.MCSE) for the margin of error when using the MCMC samples for interval estimation.
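For instance, a sketch of the MCSE for the posterior mean based on the ess_basic() sketch above (an illustration, not the misty implementation):

# MCSE of the posterior mean: posterior SD divided by sqrt(ESS)
mcse_mean <- function(draws) sd(draws) / sqrt(ess_basic(draws))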
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
a character string indicating the name of the |
args |
specification of function arguments |
data |
posterior distribution of each parameter estimate in long format |
result |
result table with summary measures, convergence, and efficiency diagnostics |
This function is a modified copy of functions provided in the rstan package by Stan Development Team (2024) and bayestestR package by Makowski et al. (2019).
Takuya Yanagida
Brooks, S. P. and Gelman, A. (1998). General Methods for Monitoring Convergence of Iterative Simulations. Journal of Computational and Graphical Statistics, 7(4): 434–455. MR1665662.
Gelman, A., & Rubin, D.B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457-472. https://doi.org/10.1214/ss/1177011136
Keller, B. T., & Enders, C. K. (2023). Blimp user’s guide (Version 3). Retrieved from www.appliedmissingdata.com/blimp
Kruschke, J. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press.
Makowski, D., Ben-Shachar, M., & Lüdecke, D. (2019). bayestestR: Describing effects and their uncertainty, existence and significance within the Bayesian framework. Journal of Open Source Software, 4(40), 1541. https://doi.org/10.21105/joss.01541
Stan Development Team (2024). RStan: the R interface to Stan. R package version 2.32.6. https://mc-stan.org/.
Vats, D. and Knudson, C. (2018). Revisiting the Gelman-Rubin Diagnostic. arXiv:1812.09384.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P.-C. (2020). Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC. Bayesian Analysis, 16(2), 667-718. https://doi.org/10.1214/20-BA1221
Zitzmann, S., & Hecht, M. (2019). Going beyond convergence in Bayesian estimation: Why precision matters too and how to assess it. Structural Equation Modeling, 26(4), 646–661. https://doi.org/10.1080/10705511.2018.1545232
blimp, blimp.update, blimp.run, blimp.print, blimp.plot
## Not run:
#----------------------------------------------------------------------------
# Blimp Example 4.3: Linear Regression

# Example 1a: Default setting, specifying name of the folder
blimp.bayes("Posterior_Ex4.3")

# Example 1b: Default setting, specifying the posterior file
blimp.bayes("Posterior_Ex4.3/posterior.csv")

# Example 2: Print all summary measures, convergence, and efficiency diagnostics
blimp.bayes("Posterior_Ex4.3", print = "all")

# Example 3: Print default measures plus MAP
blimp.bayes("Posterior_Ex4.3", print = c("default", "map"))

# Example 4: Print traditional R-hat in line with Blimp
blimp.bayes("Posterior_Ex4.3", split = TRUE, rank = FALSE, fold = FALSE)

# Example 5: Print probability of direction and the probability of
# being in the ROPE [-0.1, 0.1]
blimp.bayes("Posterior_Ex4.3", pd = TRUE, rope = c(-0.1, 0.1))

# Example 6: Write results into a text file
blimp.bayes("Posterior_Ex4.3", write = "Bayes_Summary.txt")

# Example 7: Write results into an Excel file
blimp.bayes("Posterior_Ex4.3", write = "Bayes_Summary.xlsx")
## End(Not run)
This function reads the posterior distribution, including the burn-in and post-burn-in phases, for all parameters saved in long format in a file called posterior.* by the function blimp.run or blimp when specifying posterior = TRUE, to display trace plots and posterior distribution plots.
blimp.plot(x, plot = c("none", "trace", "post"), param = NULL,
           labels = TRUE, burnin = TRUE,
           point = c("all", "none", "m", "med", "map"),
           ci = c("none", "eti", "hdi"), conf.level = 0.95, hist = TRUE,
           density = TRUE, area = TRUE, alpha = 0.4, fill = "gray85",
           nrow = NULL, ncol = NULL,
           scales = c("fixed", "free", "free_x", "free_y"),
           xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL,
           xbreaks = ggplot2::waiver(), ybreaks = ggplot2::waiver(),
           xexpand = ggplot2::waiver(), yexpand = ggplot2::waiver(),
           palette = "Set 2", binwidth = NULL, bins = NULL,
           density.col = "#0072B2", shape = 21,
           point.col = c("#CC79A7", "#D55E00", "#009E73"),
           linewidth = 0.6, linetype = "dashed", line.col = "black",
           plot.margin = NULL, legend.title.size = 10,
           legend.text.size = 10, legend.box.margin = NULL,
           saveplot = c("all", "none", "trace", "post"),
           file = "Blimp_Plot.pdf", file.plot = c("_TRACE", "_POST"),
           width = NA, height = NA, units = c("in", "cm", "mm", "px"),
           dpi = 600, check = TRUE)
x |
a character string indicating the name of the folder containing the posterior.* file or the name of the posterior.* file. Alternatively, a misty.object returned by this function can be specified. |
plot |
a character string indicating the type of plot to display, i.e., "none" for no plot, "trace" for trace plots, and "post" for posterior distribution plots. |
param |
a numeric vector indicating which parameters to print
for the trace plots or posterior distribution plots.
Note that the number of the parameter ( |
labels |
logical: if |
burnin |
logical: if TRUE, the burn-in phase is included in the trace plots. |
point |
a character vector indicating the point estimate(s) to be displayed in the posterior distribution plots, i.e., "all", "none", "m" for the mean, "med" for the median, and "map" for the maximum a posteriori. |
ci |
a character string indicating the type of credible interval to be displayed in the posterior distribution plots, i.e., "none", "eti" for the equal-tailed interval, and "hdi" for the highest density interval. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the credible interval (default is 0.95). |
hist |
logical: if TRUE, histograms are drawn in the posterior distribution plots. |
density |
logical: if TRUE, density curves are drawn in the posterior distribution plots. |
area |
logical: if |
alpha |
a numeric value between 0 and 1 for the |
fill |
a character string indicating the color for the
|
nrow |
a numeric value indicating the number of rows for arranging the plot panels. |
ncol |
a numeric value indicating the number of columns for arranging the plot panels. |
scales |
a character string indicating the scales argument of the facet_wrap function, i.e., "fixed", "free", "free_x", or "free_y". |
xlab |
a character string indicating the |
ylab |
a character string indicating the |
xlim |
a numeric vector with two elements indicating the
|
ylim |
a numeric vector with two elements indicating the
|
xbreaks |
a numeric vector indicating the |
ybreaks |
a numeric vector indicating the |
xexpand |
a numeric vector with two elements indicating the
|
yexpand |
a numeric vector with two elements indicating the
|
palette |
a character string indicating the palette name (default is "Set 2") for the hcl.colors function. |
binwidth |
a numeric value indicating the |
bins |
a numeric value indicating the |
density.col |
a character string indicating the |
shape |
a numeric value indicating the |
point.col |
a character vector with three elements indicating the
|
linewidth |
a numeric value indicating the |
linetype |
a numeric value indicating the |
line.col |
a character string indicating the |
plot.margin |
a numeric vector indicating the |
legend.title.size |
a numeric value indicating the |
legend.text.size |
a numeric value indicating the |
legend.box.margin |
a numeric vector indicating the |
saveplot |
a character vector indicating the plot(s) to be saved, i.e., "all", "none", "trace", and "post". |
file |
a character string indicating the file name of the plot (default is "Blimp_Plot.pdf"). |
file.plot |
a character vector with two elements for distinguishing
different types of plots. By default, the character
string specified in the argument |
width |
a numeric value indicating the |
height |
a numeric value indicating the |
units |
a character string indicating the units of the plot size, i.e., "in", "cm", "mm", or "px". |
dpi |
a numeric value indicating the plot resolution (default is 600). |
check |
logical: if TRUE, argument specification is checked. |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
a character string indicating the name of the |
args |
specification of function arguments |
data |
list with posterior distribution of each parameter estimate
in long format ( |
plot |
list with the trace plots ( |
Takuya Yanagida
Keller, B. T., & Enders, C. K. (2023). Blimp user’s guide (Version 3). Retrieved from www.appliedmissingdata.com/blimp
blimp, blimp.update, blimp.run, blimp.print, blimp.bayes
## Not run:
#----------------------------------------------------------------------------
# Blimp Example 4.3: Linear Regression

#..........
# Trace Plots

# Example 1a: Default setting, specifying name of the folder
blimp.plot("Posterior_Ex4.3")

# Example 1b: Default setting, specifying the posterior file
blimp.plot("Posterior_Ex4.3/posterior.csv")

# Example 1c: Print parameters 2, 3, 4, and 5
blimp.plot("Posterior_Ex4.3", param = 2:5)

# Example 1d: Arrange panels in three columns
blimp.plot("Posterior_Ex4.3", ncol = 3)

# Example 1e: Specify "Pastel 1" palette for the hcl.colors function
blimp.plot("Posterior_Ex4.3", palette = "Pastel 1")

#..........
# Posterior Distribution Plots

# Example 2a: Default setting, i.e., posterior median and equal-tailed interval
blimp.plot("Posterior_Ex4.3", plot = "post")

# Example 2b: Display posterior mean and maximum a posteriori
blimp.plot("Posterior_Ex4.3", plot = "post", point = c("m", "map"))

# Example 2c: Display maximum a posteriori and highest density interval
blimp.plot("Posterior_Ex4.3", plot = "post", point = "map", ci = "hdi")

# Example 2d: Do not display any point estimates and credible interval
blimp.plot("Posterior_Ex4.3", plot = "post", point = "none", ci = "none")

# Example 2e: Do not display histograms
blimp.plot("Posterior_Ex4.3", plot = "post", hist = FALSE)

#..........
# Save Plots

# Example 3a: Save all plots in pdf format
blimp.plot("Posterior_Ex4.3", saveplot = "all")

# Example 3b: Save all plots in png format with 300 dpi
blimp.plot("Posterior_Ex4.3", saveplot = "all", file = "Blimp_Plot.png", dpi = 300)

# Example 3c: Save posterior distribution plot, specify width and height of the plot
blimp.plot("Posterior_Ex4.3", plot = "none", saveplot = "post",
           width = 7.5, height = 7)

#----------------------------------------------------------------------------
# Plot from misty.object

# Create misty.object
object <- blimp.plot("Posterior_Ex4.3", plot = "none")

# Trace plot
blimp.plot(object, plot = "trace")

# Posterior distribution plot
blimp.plot(object, plot = "post")

#----------------------------------------------------------------------------
# Create Plots Manually

# Load ggplot2 package
library(ggplot2)

# Create misty object
object <- blimp.plot("Posterior_Ex4.3", plot = "none")

#..........
# Example 4: Trace Plots

# Extract data
data.trace <- object$data$trace

# Plot
ggplot(data.trace, aes(x = iter, y = value, color = chain)) +
  annotate("rect", xmin = 0, xmax = 1000, ymin = -Inf, ymax = Inf,
           alpha = 0.4, fill = "gray85") +
  geom_line() +
  facet_wrap(~ param, ncol = 2, scales = "free") +
  scale_x_continuous(name = "", expand = c(0.02, 0)) +
  scale_y_continuous(name = "", expand = c(0.02, 0)) +
  scale_colour_manual(name = "Chain", values = hcl.colors(n = 2, palette = "Set 2")) +
  theme_bw() +
  guides(color = guide_legend(nrow = 1, byrow = TRUE)) +
  theme(plot.margin = margin(c(4, 15, -10, 0)),
        legend.position = "bottom",
        legend.title = element_text(size = 10),
        legend.text = element_text(size = 10),
        legend.box.margin = margin(c(-16, 6, 6, 6)),
        legend.background = element_rect(fill = "transparent"))

#..........
# Example 5: Posterior Distribution Plots

# Extract data
data.post <- object$data$post

# Plot
ggplot(data.post, aes(x = value)) +
  geom_histogram(aes(y = after_stat(density)), color = "black",
                 alpha = 0.4, fill = "gray85") +
  geom_density(color = "#0072B2") +
  geom_vline(data = data.frame(param = levels(data.post$param),
                               stat = tapply(data.post$value, data.post$param, median)),
             aes(xintercept = stat, color = "Median"), linewidth = 0.6) +
  geom_vline(data = data.frame(param = levels(data.post$param),
                               low = tapply(data.post$value, data.post$param,
                                            function(y) quantile(y, probs = 0.025))),
             aes(xintercept = low), linetype = "dashed", linewidth = 0.6) +
  geom_vline(data = data.frame(param = levels(data.post$param),
                               upp = tapply(data.post$value, data.post$param,
                                            function(y) quantile(y, probs = 0.975))),
             aes(xintercept = upp), linetype = "dashed", linewidth = 0.6) +
  facet_wrap(~ param, ncol = 2, scales = "free") +
  scale_x_continuous(name = "", expand = c(0.02, 0)) +
  scale_y_continuous(name = "Probability Density, f(x)",
                     expand = expansion(mult = c(0L, 0.05))) +
  scale_color_manual(name = "Point Estimate", values = c(Median = "#D55E00")) +
  labs(caption = "95% Equal-Tailed Interval") +
  theme_bw() +
  theme(plot.margin = margin(c(4, 15, -8, 4)),
        plot.caption = element_text(hjust = 0.5, vjust = 7),
        legend.position = "bottom",
        legend.title = element_text(size = 10),
        legend.text = element_text(size = 10),
        legend.box.margin = margin(c(-30, 6, 6, 6)),
        legend.background = element_rect(fill = "transparent"))
## End(Not run)
This function prints the result sections of a Blimp output file (.blimp-out) on the R console. By default, the function prints selected result sections, i.e., Algorithmic Options Specified, Data Information, Model Information, Warning Messages, Outcome Model Estimates, and Generated Parameters.
blimp.print(x,
            result = c("all", "default", "algo.options", "data.info",
                       "model.info", "warn.mess", "out.model", "gen.param"),
            exclude = NULL, color = c("none", "blue", "violet"),
            style = c("bold", "regular"), not.result = TRUE,
            write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
a character string indicating the name of the Blimp output
file with or without the file extension |
result |
a character vector specifying Blimp result sections included in the output (see 'Details'). |
exclude |
a character vector specifying Blimp input command or result sections excluded from the output (see 'Details'). |
color |
a character vector with two elements indicating the colors
used for the main headers (e.g., |
style |
a character vector with two elements indicating the style
used for headers (e.g., |
not.result |
logical: if |
write |
a character string naming a file for writing the output into
a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The following result sections can be selected by using the result argument or excluded by using the exclude argument:

"algo.options" for the ALGORITHMIC OPTIONS SPECIFIED section
"simdat.summary" for the SIMULATED DATA SUMMARIES section
"order.simdat" for the VARIABLE ORDER IN SIMULATED DATA section
"burnin.psr" for the BURN-IN POTENTIAL SCALE REDUCTION (PSR) OUTPUT section
"mh.accept" for the METROPOLIS-HASTINGS ACCEPTANCE RATES section
"data.info" for the DATA INFORMATION section
"var.imp" for the VARIABLES IN IMPUTATION MODEL section
"model.info" for the MODEL INFORMATION section
"param.label" for the PARAMETER LABELS section
"warn.mess" for the WARNING MESSAGES section
"fit" for the MODEL FIT section
"cor.resid" for the CORRELATIONS AMONG RESIDUALS section
"out.model" for the OUTCOME MODEL ESTIMATES section
"pred.model" for the PREDICTOR MODEL ESTIMATES section
"gen.param" for the GENERATED PARAMETERS section
"order.impdat" for the VARIABLE ORDER IN IMPUTED DATA section
Note that all result sections are requested by specifying result = "all". The result argument is also used to select one result section (e.g., result = "algo.options") or more than one result section (e.g., result = c("algo.options", "fit")), or to request result sections in addition to the default setting (e.g., result = c("default", "fit")). The exclude argument is used to exclude result sections from the output (e.g., exclude = "algo.options").
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
character string or misty object |
args |
specification of function arguments |
print |
print objects |
notprint |
character vectors indicating the result sections not requested |
result |
list with Blimp version ( |
Takuya Yanagida
Keller, B. T., & Enders, C. K. (2023). Blimp user’s guide (Version 3). Retrieved from www.appliedmissingdata.com/blimp
blimp, blimp.update, blimp.run, blimp.plot, blimp.bayes
## Not run: 
#----------------------------------------------------------------------------
# Blimp Example 4.3: Linear Regression

# Example 1a: Default setting
blimp.print("Ex4.3.blimp-out")

# Example 1b: Print OUTCOME MODEL ESTIMATES only
blimp.print("Ex4.3.blimp-out", result = "out.model")

# Example 1c: Print MODEL FIT in addition to the default setting
blimp.print("Ex4.3.blimp-out", result = c("default", "fit"))

# Example 1d: Exclude DATA INFORMATION section
blimp.print("Ex4.3.blimp-out", exclude = "data.info")

# Example 1e: Print all result sections, but exclude MODEL FIT section
blimp.print("Ex4.3.blimp-out", result = "all", exclude = "fit")

# Example 1f: Print result sections in a different order
blimp.print("Ex4.3.blimp-out", result = c("model.info", "fit", "algo.options"))

#----------------------------------------------------------------------------
# misty.object of type 'blimp.print'

# Example 2
# Create misty.object
object <- blimp.print("Ex4.3.blimp-out", output = FALSE)

# Print misty.object
blimp.print(object)

#----------------------------------------------------------------------------
# Write Results

# Example 3: Write results into a text file
blimp.print("Ex4.3.blimp-out", write = "Output_4-3.txt")

## End(Not run)
This function runs a group of Blimp models (.imp files) located within a single directory or nested within subdirectories.
blimp.run(target = getwd(), recursive = FALSE,
          replace.out = c("always", "never", "modified"),
          posterior = FALSE, folder = "Posterior_",
          format = c("csv", "csv2", "xlsx", "rds", "RData"),
          clear = FALSE, Blimp = detect.blimp(), check = TRUE)
target |
a character string indicating the directory containing
Blimp input files ( |
recursive |
logical: if |
replace.out |
a character string for specifying three settings:
|
posterior |
logical: if |
folder |
a character string indicating the prefix of the folder for
saving the posterior distributions. The default setting is
|
format |
a character vector indicating the file format(s) for saving the
posterior distributions, i.e., |
clear |
logical: if |
Blimp |
a character string for specifying the name or path of the Blimp executable to be used for running models. This covers situations where Blimp is not in the system's path, or where one wants to test different versions of the Blimp program. Note that there is no need to specify this argument for most users since it has intelligent defaults. |
check |
logical: if |
None.
This function is based on the detect_blimp() and rblimp() functions in the rblimp package by Brian T. Keller (2024).
Takuya Yanagida
Keller, B. T., & Enders, C. K. (2023). Blimp user’s guide (Version 3). Retrieved from www.appliedmissingdata.com/blimp
Keller, B. T. (2024). rblimp: Integration of Blimp software into R. R package version 0.1.31. https://github.com/blimp-stats/rblimp
blimp, blimp.update, blimp.print, blimp.plot, blimp.bayes
## Not run: 
# Example 1: Run Blimp models located within the current working directory
blimp.run()

# Example 2: Run Blimp models located nested within subdirectories
blimp.run(recursive = TRUE)

# Example 3: Run Blimp input file
blimp.run("Ex4.1a.imp")

# Example 4: Run Blimp input files
blimp.run(c("Ex4.1a.imp", "Ex4.1b.imp"))

# Example 5: Run Blimp models, save posterior distributions in an R workspace file
blimp.run(posterior = TRUE, format = "RData")

## End(Not run)
This function updates specific input command sections of a misty.object of type blimp to create an updated Blimp input file, runs the updated input file by using the blimp.run() function, and prints the updated Blimp output file by using the blimp.print() function.
blimp.update(x, update, file = "Blimp_Input_Update.imp", comment = FALSE,
             replace.inp = TRUE, blimp.run = TRUE, posterior = FALSE,
             folder = "Posterior_",
             format = c("csv", "csv2", "xlsx", "rds", "RData"),
             clear = TRUE, replace.out = c("always", "never", "modified"),
             Blimp = detect.blimp(),
             result = c("all", "default", "algo.options", "data.info",
                        "model.info", "warn.mess", "out.model", "gen.param"),
             exclude = NULL, color = c("none", "blue", "violet"),
             style = c("bold", "regular"), not.result = TRUE,
             write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
|
update |
a character vector containing the updated input command sections. |
file |
a character string indicating the name of the updated Blimp
input file with or without the file extension |
comment |
logical: if |
replace.inp |
logical: if |
blimp.run |
logical: if |
posterior |
logical: if |
folder |
a character string indicating the prefix of the folder for
saving the posterior distributions. The default setting is
|
format |
a character vector indicating the file format(s) for saving the
posterior distributions, i.e., |
clear |
logical: if |
replace.out |
a character string for specifying three settings:
|
Blimp |
a character string for specifying the name or path of the Blimp executable to be used for running models. This covers situations where Blimp is not in the system's path, or where one wants to test different versions of the Blimp program. Note that there is no need to specify this argument for most users since it has intelligent defaults. |
result |
a character vector specifying Blimp result sections included
in the output (see 'Details' in the |
exclude |
a character vector specifying Blimp input command or result
sections excluded from the output (see 'Details' in the
|
color |
a character vector with two elements indicating the colors
used for headers (e.g., |
style |
a character vector with two elements indicating the style
used for headers (e.g., |
not.result |
logical: if |
write |
a character string naming a file for writing the output into
a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
data |
a matrix or data frame from which the variables names for
the section |
The function is used to update the following Blimp input sections:
DATA
VARIABLES
CLUSTERID
ORDINAL
NOMINAL
COUNT
WEIGHT
MISSING
LATENT
RANDOMEFFECT
TRANSFORM
BYGROUP
FIXED
CENTER
MODEL
SIMPLE
PARAMETERS
TEST
FCS
SIMULATE
SEED
BURN
ITERATIONS
CHAINS
NIMPS
THIN
OPTIONS
OUTPUT
SAVE
---;
SpecificationThe ---;
specification
is used to remove entire sections (e.g., CENTER: ---;
) from the Blimp
input. Note that ---;
including the semicolon ;
needs to be
specified, i.e., ---
without the semicolon ;
will result in an
error message.
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
|
update |
a character vector containing the updated Blimp input command sections |
args |
specification of function arguments |
write |
updated write command sections |
result |
list with result sections ( |
Takuya Yanagida
Keller, B. T., & Enders, C. K. (2023). Blimp user’s guide (Version 3). Retrieved from www.appliedmissingdata.com/blimp
blimp.run, blimp.print, blimp.plot, blimp.bayes
## Not run: 
#----------------------------------------------------------------------------
# Example 1a: Update BURN and ITERATIONS section

# Specify Blimp input
input <- '
DATA: data1.csv;
ORDINAL: d;
MISSING: 999;
FIXED: d;
CENTER: x1 x2;
MODEL: y ~ x1 x2 d;
SEED: 90291;
BURN: 1000;
ITERATIONS: 10000;
'

# Run Blimp input
mod0 <- blimp(input, file = "Ex4.3.imp", clear = FALSE)

# Update sections
update1 <- '
BURN: 5000;
ITERATIONS: 20000;
'

# Run updated Blimp input
mod1 <- blimp.update(mod0, update1, file = "Ex4.3_update1.imp")

#----------------------------------------------------------------------------
# Example 1b: Remove CENTER section

# Remove section
update2 <- '
CENTER: ---;
'

# Run updated Blimp input
mod2 <- blimp.update(mod1, update2, file = "Ex4.3_update2.imp")

## End(Not run)
This function centers predictor variables in single-level data, two-level data, and three-level data at the grand mean (CGM, i.e., grand mean centering) or within cluster (CWC, i.e., group mean centering).
center(..., data = NULL, cluster = NULL, type = c("CGM", "CWC"),
       cwc.mean = c("L2", "L3"), value = NULL, append = TRUE,
       name = ".c", as.na = NULL, check = TRUE)
... |
a numeric vector for centering a predictor variable, or a
data frame for centering more than one predictor. Alternatively,
an expression indicating the variable names in |
data |
a data frame when specifying one or more predictor variables
in the argument |
cluster |
a character string indicating the name of the cluster
variable in |
type |
a character string indicating the type of centering, i.e.,
|
cwc.mean |
a character string indicating the type of centering of a level-1
predictor variable in a three-level model, i.e., |
value |
a numeric value for centering on a specific user-defined value.
Note that this option is only available when specifying a
single-level predictor variable, i.e., |
append |
logical: if |
name |
a character string or character vector indicating the names of
the centered predictor variables. By default, centered predictor
variables are named with the ending |
as.na |
a numeric vector indicating user-defined missing values, i.e.
these values are converted to |
check |
logical: if |
Predictor variables in single-level data can only be centered at the grand mean (CGM) by specifying type = "CGM":

\(x_i - \bar{x}\)

where \(x_i\) is the predictor value of observation \(i\) and \(\bar{x}\) is the average \(x\) score. Note that predictor variables can be centered on any meaningful value by specifying the argument value, e.g., a predictor variable centered at 5 by applying the following formula:

\(x_i - \bar{x} + 5\)

resulting in a mean of the centered predictor variable of 5.

Level-1 (L1) predictor variables in two-level data can be centered at the grand mean (CGM) by specifying type = "CGM":

\(x_{ij} - \bar{x}_{..}\)

where \(x_{ij}\) is the predictor value of observation \(i\) in L2 cluster \(j\) and \(\bar{x}_{..}\) is the average \(x\) score.

L1 predictor variables are centered at the group mean (CWC) by specifying type = "CWC" (Default):

\(x_{ij} - \bar{x}_{.j}\)

where \(\bar{x}_{.j}\) is the average \(x\) score in cluster \(j\).

Level-2 (L2) predictor variables in two-level data can only be centered at the grand mean:

\(x_{.j} - \bar{x}_{..}\)

where \(x_{.j}\) is the predictor value of Level-2 cluster \(j\) and \(\bar{x}_{..}\) is the average Level-2 cluster score. Note that the cluster membership variable needs to be specified when centering a L2 predictor variable in two-level data. Otherwise the average individual score instead of the average cluster score is used to center the predictor variable.

Level-1 (L1) predictor variables in three-level data can be centered at the grand mean (CGM) by specifying type = "CGM":

\(x_{ijk} - \bar{x}_{...}\)

where \(x_{ijk}\) is the predictor value of observation \(i\) in Level-2 cluster \(j\) within Level-3 cluster \(k\) and \(\bar{x}_{...}\) is the average \(x\) score.

L1 predictor variables are centered within cluster (CWC) by specifying type = "CWC" (Default). However, L1 predictor variables can be either centered within Level-2 cluster (cwc.mean = "L2", Default, see Brincks et al., 2017):

\(x_{ijk} - \bar{x}_{.jk}\)

or within Level-3 cluster (cwc.mean = "L3", see Enders, 2013):

\(x_{ijk} - \bar{x}_{..k}\)

where \(\bar{x}_{.jk}\) is the average \(x\) score in Level-2 cluster \(j\) within Level-3 cluster \(k\) and \(\bar{x}_{..k}\) is the average \(x\) score in Level-3 cluster \(k\).

Level-2 (L2) predictor variables in three-level data can be centered at the grand mean (CGM) by specifying type = "CGM":

\(x_{.jk} - \bar{x}_{...}\)

where \(x_{.jk}\) is the predictor value of Level-2 cluster \(j\) within Level-3 cluster \(k\) and \(\bar{x}_{...}\) is the average Level-2 cluster score.

L2 predictor variables are centered within cluster (CWC) by specifying type = "CWC" (Default):

\(x_{.jk} - \bar{x}_{..k}\)

where \(\bar{x}_{..k}\) is the average \(x\) score in Level-3 cluster \(k\).

Level-3 (L3) predictor variables in three-level data can only be centered at the grand mean:

\(x_{..k} - \bar{x}_{...}\)

where \(x_{..k}\) is the predictor value of Level-3 cluster \(k\) and \(\bar{x}_{...}\) is the average Level-3 cluster score. Note that the cluster membership variable needs to be specified when centering a L3 predictor variable in three-level data.
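To make the CGM and CWC formulas concrete, here is a minimal base-R sketch (not part of misty) that reproduces grand-mean and group-mean centering for a level-1 predictor in two-level data, assuming the lavaan package and its Demo.twolevel data set are available:

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# CGM: subtract the grand mean from each observation
y1.cgm <- Demo.twolevel$y1 - mean(Demo.twolevel$y1)

# CWC: subtract the cluster mean, computed with ave(), from each observation
y1.cwc <- Demo.twolevel$y1 - ave(Demo.twolevel$y1, Demo.twolevel$cluster, FUN = mean)

# CWC-centered variables have a mean of 0 within every cluster
range(tapply(y1.cwc, Demo.twolevel$cluster, mean))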
Returns a numeric vector or data frame with the same length or same number of
rows as ...
containing the centered variable(s).
Takuya Yanagida [email protected]
Brincks, A. M., Enders, C. K., Llabre, M. M., Bulotsky-Shearer, R. J., Prado, G., & Feaster, D. J. (2017). Centering predictor variables in three-level contextual models. Multivariate Behavioral Research, 52(2), 149–163. https://doi.org/10.1080/00273171.2016.1256753
Chang, C.-N., & Kwok, O.-M. (2022). Partitioning variance for a within-level predictor in multilevel models. Structural Equation Modeling: A Multidisciplinary Journal. Advance online publication. https://doi.org/10.1080/10705511.2022.2051175
Enders, C. K. (2013). Centering predictors and contextual effects. In M. A. Scott, J. S. Simonoff, & B. D. Marx (Eds.), The Sage handbook of multilevel modeling (pp. 89-109). Sage. https://dx.doi.org/10.4135/9781446247600
Enders, C. K., & Tofighi, D. (2007). Centering predictor variables in cross-sectional multilevel models: A new look at an old issue. Psychological Methods, 12, 121-138. https://doi.org/10.1037/1082-989X.12.2.121
Rights, J. D., Preacher, K. J., & Cole, D. A. (2020). The danger of conflating level-specific effects of control variables when primary interest lies in level-2 effects. British Journal of Mathematical & Statistical Psychology, 73, 194-211. https://doi.org/10.1111/bmsp.12194
Yaremych, H. E., Preacher, K. J., & Hedeker, D. (2021). Centering categorical predictors in multilevel models: Best practices and interpretation. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000434
coding, cluster.scores, rec, item.reverse, rwg.lindell, item.scores.
#----------------------------------------------------------------------------
# Predictor Variables in Single-Level Data

# Example 1a: Center predictor 'disp' at the grand mean
center(mtcars$disp)

# Example 1b: Alternative specification using the 'data' argument
center(disp, data = mtcars)

# Example 2a: Center predictors 'disp' and 'hp' at the grand mean and append to 'mtcars'
cbind(mtcars, center(mtcars[, c("disp", "hp")]))

# Example 2b: Alternative specification using the 'data' argument
center(disp, hp, data = mtcars)

# Example 3: Center predictor 'disp' at the value 3
center(disp, data = mtcars, value = 3)

# Example 4: Center predictors 'disp' and 'hp' and label with the suffix ".v"
center(disp, hp, data = mtcars, name = ".v")

#----------------------------------------------------------------------------
# Predictor Variables in Two-Level Data

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Example 5a: Center L1 predictor 'y1' within cluster
center(Demo.twolevel$y1, cluster = Demo.twolevel$cluster)

# Example 5b: Alternative specification using the 'data' argument
center(y1, data = Demo.twolevel, cluster = "cluster")

# Example 6: Center L2 predictor 'w1' at the grand mean
center(w1, data = Demo.twolevel, cluster = "cluster")

# Example 7: Center L1 predictor 'y1' within cluster and L2 predictor 'w1' at the grand mean
center(y1, w1, data = Demo.twolevel, cluster = "cluster")

#----------------------------------------------------------------------------
# Predictor Variables in Three-Level Data

# Create arbitrary three-level data
Demo.threelevel <- data.frame(Demo.twolevel, cluster2 = Demo.twolevel$cluster,
                              cluster3 = rep(1:10, each = 250))

# Example 8a: Center L1 predictor 'y1' within L2 cluster
center(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"))

# Example 8b: Center L1 predictor 'y1' within L3 cluster
center(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"), cwc.mean = "L3")

# Example 8c: Center L1 predictor 'y1' within L2 cluster and L2 predictor 'w1' within L3 cluster
center(y1, w1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"))
This function computes tolerance, standard error inflation factor, variance inflation factor, eigenvalues, condition index, and variance proportions for linear, generalized linear, and mixed-effects models.
check.collin(model, print = c("all", "vif", "eigen"), digits = 3, p.digits = 3,
             write = NULL, append = TRUE, check = TRUE, output = TRUE)
model |
a fitted model of class |
print |
a character vector indicating which results to show, i.e. |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
Collinearity diagnostics can be conducted for objects returned from the lm() and glm() functions, but also from objects returned from the lmer() and glmer() functions from the lme4 package, the lme() function from the nlme package, and the glmmTMB() function from the glmmTMB package.
The generalized variance inflation factor (Fox & Monette, 1992) is computed for terms with more than 1 df resulting from factors with more than two levels. The generalized VIF (GVIF) is interpretable as the inflation in size of the confidence ellipse or ellipsoid for the coefficients of the term in comparison with what would be obtained for orthogonal data. GVIF is invariant to the coding of the terms in the model. In order to adjust for the dimension of the confidence ellipsoid, the adjusted GVIF, \(\mathrm{GVIF}^{1/(2 \cdot df)}\), is computed. Note that the adjusted GVIF (aGVIF) is actually a generalized standard error inflation factor (GSIF). Thus, the aGVIF needs to be squared before applying a common cutoff threshold for the VIF (e.g., VIF > 10). Note that the output of the check.collin() function reports either the variance inflation factor or the squared adjusted generalized variance inflation factor in the column VIF, while the standard error inflation factor or the adjusted generalized variance inflation factor is reported in the column SIF.
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
model |
model specified in the |
args |
specification of function arguments |
result |
list with result tables, i.e., |
The computation of the VIF and the GVIF is based on the vif()
function
in the car package by John Fox, Sanford Weisberg and Brad Price (2020),
and the computation of eigenvalues, condition index, and variance proportions
is based on the ols_eigen_cindex()
function in the olsrr package
by Aravind Hebbali (2020).
Takuya Yanagida [email protected]
Fox, J., & Monette, G. (1992). Generalized collinearity diagnostics. Journal of the American Statistical Association, 87, 178-183.
Fox, J., Weisberg, S., & Price, B. (2020). car: Companion to Applied Regression. R package version 3.0-8. https://cran.r-project.org/web/packages/car/
Hebbali, A. (2020). olsrr: Tools for building OLS regression models. R package version 0.5.3. https://cran.r-project.org/web/packages/olsrr/
dat <- data.frame(group = c(1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4),
                  x1 = c(3, 2, 4, 9, 5, 3, 6, 4, 5, 6, 3, 5),
                  x2 = c(1, 4, 3, 1, 2, 4, 3, 5, 1, 7, 8, 7),
                  x3 = c(7, 3, 4, 2, 5, 6, 4, 2, 3, 5, 2, 8),
                  x4 = c("a", "b", "a", "c", "c", "c", "a", "b", "b", "c", "a", "c"),
                  y1 = c(2, 7, 4, 4, 7, 8, 4, 2, 5, 1, 3, 8),
                  y2 = c(0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1),
                  stringsAsFactors = TRUE)

#-------------------------------------------------------------------------------
# Linear model

# Estimate linear model with continuous predictors
mod.lm1 <- lm(y1 ~ x1 + x2 + x3, data = dat)

# Example 1: Tolerance, std. error, and variance inflation factor
check.collin(mod.lm1)

# Example 2: Tolerance, std. error, and variance inflation factor
# Eigenvalue, condition index, and variance proportions
check.collin(mod.lm1, print = "all")

# Estimate model with continuous and categorical predictors
mod.lm2 <- lm(y1 ~ x1 + x2 + x3 + x4, data = dat)

# Example 3: Tolerance, generalized std. error, and variance inflation factor
check.collin(mod.lm2)

#-------------------------------------------------------------------------------
# Generalized linear model

# Estimate logistic regression model with continuous predictors
mod.glm <- glm(y2 ~ x1 + x2 + x3, data = dat, family = "binomial")

# Example 4: Tolerance, std. error, and variance inflation factor
check.collin(mod.glm)

## Not run: 
#-------------------------------------------------------------------------------
# Linear mixed-effects model

# Estimate linear mixed-effects model with continuous predictors using lme4 package
mod.lmer <- lme4::lmer(y1 ~ x1 + x2 + x3 + (1|group), data = dat)

# Example 5: Tolerance, std. error, and variance inflation factor
check.collin(mod.lmer)

# Estimate linear mixed-effects model with continuous predictors using nlme package
mod.lme <- nlme::lme(y1 ~ x1 + x2 + x3, random = ~ 1 | group, data = dat)

# Example 6: Tolerance, std. error, and variance inflation factor
check.collin(mod.lme)

# Estimate linear mixed-effects model with continuous predictors using glmmTMB package
mod.glmmTMB1 <- glmmTMB::glmmTMB(y1 ~ x1 + x2 + x3 + (1|group), data = dat)

# Example 7: Tolerance, std. error, and variance inflation factor
check.collin(mod.glmmTMB1)

#-------------------------------------------------------------------------------
# Generalized linear mixed-effects model

# Estimate mixed-effects logistic regression model with continuous predictors using lme4 package
mod.glmer <- lme4::glmer(y2 ~ x1 + x2 + x3 + (1|group), data = dat, family = "binomial")

# Example 8: Tolerance, std. error, and variance inflation factor
check.collin(mod.glmer)

# Estimate mixed-effects logistic regression model with continuous predictors using glmmTMB package
mod.glmmTMB2 <- glmmTMB::glmmTMB(y2 ~ x1 + x2 + x3 + (1|group), data = dat, family = "binomial")

# Example 9: Tolerance, std. error, and variance inflation factor
check.collin(mod.glmmTMB2)

#----------------------------------------------------------------------------
# Write Results

# Example 10: Write results into a text file
check.collin(mod.lm1, write = "Diagnostics.txt")

## End(Not run)
This function computes statistical measures for leverage, distance, and
influence for linear models estimated by using the lm()
function.
Mahalanobis distance and hat values are computed for quantifying
leverage, standardized leverage-corrected residuals and
studentized leverage-corrected residuals are computed for quantifying
distance, and Cook's distance and DfBetas are computed
for quantifying influence.
check.outlier(model, check = TRUE, ...)
model |
a fitted model of class |
check |
logical: if |
... |
further arguments to be passed to or from methods. |
In regression analysis, an observation can be extreme in three major ways (see Darlington & Hayes, 2017, p. 484): (1) An observation has high leverage if it has an atypical pattern of values on the predictors, (2) an observation has high distance if its observed outcome value \(Y_i\) has a large deviation from the predicted value \(\hat{Y}_i\), and (3) an observation has high influence if its inclusion substantially changes the estimates for the intercept and/or slopes.
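For reference, the same classes of measures can be obtained with base R functions; a minimal sketch follows, in which the Mahalanobis distance is computed from the predictor matrix (an assumption about how the measure is defined here, not a statement about misty's internals):

# Fitted model
mod <- lm(mpg ~ cyl + disp + hp, data = mtcars)

# Leverage: Mahalanobis distance of the predictors and hat values
X <- model.matrix(mod)[, -1]
mahal <- mahalanobis(X, center = colMeans(X), cov = cov(X))
hat <- hatvalues(mod)

# Distance: standardized and studentized leverage-corrected residuals
rstand <- rstandard(mod)
rstud <- rstudent(mod)

# Influence: Cook's distance and DfBetas
cook <- cooks.distance(mod)
dfb <- dfbetas(mod)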
Returns a data frame with the following entries:
idout |
ID variable |
mahal |
Mahalanobis distance |
hat |
hat values |
rstand |
standardized leverage-corrected residuals |
rstud |
studentized leverage-corrected residuals |
cook |
Cook's distance |
Intercept.dfb |
DFBetas for the intercept |
pred1.dfb |
DFBetas for the slope of the predictor pred1 |
....dfb |
DFBetas for the slope of the predictor ... |
Takuya Yanagida [email protected]
Darlington, R. B., &, Hayes, A. F. (2017). Regression analysis and linear models: Concepts, applications, and implementation. The Guilford Press.
# Example 1: Regression model and measures for leverage, distance, and influence
mod.lm <- lm(mpg ~ cyl + disp + hp, data = mtcars)
check.outlier(mod.lm)

# Merge result table with the data
dat1 <- cbind(mtcars, check.outlier(mod.lm))
This function performs residual diagnostics for linear models estimated by
using the lm()
function for detecting nonlinearity (partial residual or
component-plus-residual plots), nonconstant error variance (predicted values
vs. residuals plot), and non-normality of residuals (Q-Q plot and histogram
with density plot).
check.resid(model, type = c("linear", "homo", "normal"),
            resid = c("unstand", "stand", "student"),
            point.shape = 21, point.fill = "gray80", point.size = 1,
            line1 = TRUE, line2 = TRUE, line.type1 = "solid",
            line.type2 = "dashed", line.width1 = 1, line.width2 = 1,
            line.color1 = "#0072B2", line.color2 = "#D55E00",
            bar.width = NULL, bar.n = 30, bar.color = "black",
            bar.fill = "gray95", strip.size = 11, label.size = 10,
            axis.size = 10, xlimits = NULL, ylimits = NULL,
            xbreaks = ggplot2::waiver(), ybreaks = ggplot2::waiver(),
            check = TRUE, plot = TRUE)
model |
a fitted model of class |
type |
a character string specifying the type of the plot, i.e.,
|
resid |
a character string specifying the type of residual used for
the partial (component-plus-residual) plots or Q-Q plot and
histogram, i.e., |
point.shape |
a numeric value for specifying the argument |
point.fill |
a numeric value for specifying the argument |
point.size |
a numeric value for specifying the argument |
line1 |
logical: if |
line2 |
logical: if |
line.type1 |
a character string or numeric value for specifying the argument
|
line.type2 |
a character string or numeric value for specifying the argument
|
line.width1 |
a numeric value for specifying the argument |
line.width2 |
a numeric value for specifying the argument |
line.color1 |
a character string or numeric value for specifying the argument
|
line.color2 |
a character string or numeric value for specifying the argument
|
bar.width |
a numeric value for specifying the argument |
bar.n |
a numeric value for specifying the argument |
bar.color |
a character string or numeric value for specifying the argument
|
bar.fill |
a character string or numeric value for specifying the argument
|
strip.size |
a numeric value for specifying the argument |
label.size |
a numeric value for specifying the argument |
axis.size |
a numeric value for specifying the argument |
xlimits |
a numeric value for specifying the argument |
ylimits |
a numeric value for specifying the argument |
xbreaks |
a numeric value for specifying the argument |
ybreaks |
a numeric value for specifying the argument |
check |
logical: if |
plot |
logical: if |
The violation of the assumption of linearity implies that the model cannot accurately capture the systematic pattern of the relationship between the outcome and predictor variables. In other words, the specified regression surface does not accurately represent the relationship between the conditional mean values of \(Y\) and the \(X\)s. That means the average error \(E(\varepsilon)\) is not 0 at every point on the regression surface (Fox, 2016).
In multiple regression, plotting the outcome variable \(Y\) against each predictor variable \(X_j\) can be misleading because it does not reflect the partial relationship between \(Y\) and \(X_j\) (i.e., statistically controlling for the other \(X\)s), but rather the marginal relationship between \(Y\) and \(X_j\) (i.e., ignoring the other \(X\)s). Partial residual plots or component-plus-residual plots should be used to detect nonlinearity in multiple regression. The partial residual for the \(j\)th predictor variable is defined as

\(e_i^{(j)} = b_j X_{ij} + e_i\)

The linear component of the partial relationship between \(Y\) and \(X_j\) is added back to the least-squares residuals, which may include an unmodeled nonlinear component. Then, the partial residual \(e_i^{(j)}\) is plotted against the predictor variable \(X_j\). Nonlinearity may become apparent when a non-parametric regression smoother is applied.

By default, the function plots each predictor against the partial residuals, and draws the linear regression line and the loess smooth line in the partial residual plots (a hand-rolled version is sketched below).
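A minimal base-R sketch of a hand-rolled partial (component-plus-residual) plot following the formula above, using one predictor of the airquality data (the choice of 'Wind' is arbitrary):

# Fitted model (lm() drops incomplete cases by default)
mod <- lm(Ozone ~ Solar.R + Wind + Temp, data = airquality)

# Partial residual for 'Wind': e_i + b_Wind * Wind_i
dat <- model.frame(mod)
part.resid <- residuals(mod) + coef(mod)["Wind"] * dat$Wind

# Plot against the predictor with a linear and a loess smooth line
plot(dat$Wind, part.resid, xlab = "Wind", ylab = "Partial residual")
abline(lm(part.resid ~ dat$Wind), col = "blue")
lines(lowess(dat$Wind, part.resid), col = "red", lty = 2)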
The violation of the assumption of constant error variance, often referred to as heteroscedasticity, implies that the variance of the outcome variable around the regression surface is not the same at every point on the regression surface (Fox, 2016).
Plotting residuals against the outcome variable \(Y\) instead of the predicted values \(\hat{Y}\) is not recommended because \(Y = \hat{Y} + e\). Consequently, the linear correlation between the outcome variable \(Y\) and the residuals \(e\) is \(\sqrt{1 - R^2}\), where \(R\) is the multiple correlation coefficient. In contrast, plotting residuals against the predicted values \(\hat{Y}\) is much easier to examine for evidence of nonconstant error variance as the correlation between \(\hat{Y}\) and \(e\) is 0. Note that the least-squares residuals generally have unequal variance \(\sigma^2 (1 - h_i)\), where \(h_i\) is the leverage of observation \(i\), even if the errors \(\varepsilon\) have constant variance \(\sigma^2\). The studentized residuals \(e_i^*\), however, have a constant variance under the assumption of the regression model. Residuals are studentized by dividing them by \(\hat{\sigma}_{(-i)} \sqrt{1 - h_i}\), where \(\hat{\sigma}_{(-i)}\) is the estimate of \(\sigma\) obtained after deleting the \(i\)th observation, and \(h_i\) is the leverage of observation \(i\) (Meuleman et al., 2015).
By default, the function plots the predicted values against the studentized residuals. It also draws a horizontal line at 0, a loess smooth line for all residuals, as well as separate loess smooth lines for positive and negative residuals.
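The core of this diagnostic can be sketched in a few lines of base R, plotting the predicted values against the studentized residuals with a zero line and a loess smooth line (a simplified stand-in for the plot described above, not misty's implementation):

# Fitted model
mod <- lm(Ozone ~ Solar.R + Wind + Temp, data = airquality)

# Predicted values vs. studentized residuals
plot(fitted(mod), rstudent(mod),
     xlab = "Predicted values", ylab = "Studentized residuals")
abline(h = 0, lty = 2)
lines(lowess(fitted(mod), rstudent(mod)), col = "red")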
Statistical inference under the violation of the assumption of normally distributed errors is approximately valid in all but small samples. However, the efficiency of least squares is not robust because the least-squares estimator is the most efficient and unbiased estimator only when the errors are normally distributed. For instance, when error distributions have heavy tails, the least-squares estimator becomes much less efficient compared to robust estimators. In addition, error distributions with heavy tails result in outliers and compromise the interpretation of conditional means because the mean is not an accurate measure of central tendency in a highly skewed distribution. Moreover, a multimodal error distribution suggests the omission of one or more discrete explanatory variables that naturally divide the data into groups (Fox, 2016).
By default, the function plots a Q-Q plot of the unstandardized residuals, and a histogram of the unstandardized residuals and a density plot. Note that studentized residuals follow a \(t\)-distribution with \(n - k - 2\) degrees of freedom, where \(n\) is the sample size and \(k\) is the number of predictors. However, the normal and \(t\)-distribution are nearly identical unless the sample size is small. Moreover, even if the model is correct, the studentized residuals are not an independent random sample from \(t_{n - k - 2}\). Residuals are correlated with each other depending on the configuration of the predictor values. The correlation is generally negligible unless the sample size is small.
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
model |
model specified in |
plotdat |
data frame used for the plot |
args |
specification of function arguments |
plot |
ggplot2 object for plotting the residuals |
Takuya Yanagida [email protected]
Fox, J. (2016). Applied regression analysis and generalized linear models (3rd ed.). Sage Publications, Inc.
Meuleman, B., Loosveldt, G., & Emonds, V. (2015). Regression analysis: Assumptions and diagnostics. In H. Best & C. Wolf (Eds.), The SAGE handbook of regression analysis and causal inference (pp. 83-110). Sage.
## Not run: 
#-------------------------------------------------------------------------------
# Residual diagnostics for a linear model

mod <- lm(Ozone ~ Solar.R + Wind + Temp, data = airquality)

# Example 1: Partial (component-plus-residual) plots
check.resid(mod, type = "linear")

# Example 2: Predicted values vs. residuals plot
check.resid(mod, type = "homo")

# Example 3: Q-Q plot and histogram with density plot
check.resid(mod, type = "normal")

#-------------------------------------------------------------------------------
# Extract data and ggplot2 object

object <- check.resid(mod, type = "linear", plot = FALSE)

# Data frame
object$plotdat

# ggplot object
object$plot

## End(Not run)
This function adds color and style to output texts on terminals that support
'ANSI' color and highlight codes that can be printed by using the cat
function.
chr.color(x,
          color = c("black", "red", "green", "yellow", "blue", "violet",
                    "cyan", "white", "gray", "b.red", "b.green", "b.yellow",
                    "b.blue", "b.violet", "b.cyan", "b.white"),
          bg = c("none", "black", "red", "green", "yellow", "blue",
                 "violet", "cyan", "white"),
          style = c("regular", "bold", "italic", "underline"), check = TRUE)
x |
a character vector. |
color |
a character string indicating the text color, e.g., |
bg |
a character string indicating the background color of the text,
e.g., |
style |
a character vector indicating the font style, i.e., |
check |
logical: if |
Returns a character vector.
This function is based on functions provided in the crayon package by Gábor Csárdi.
Takuya Yanagida
Csárdi G (2022). crayon: Colored Terminal Output. R package version 1.5.2, https://CRAN.R-project.org/package=crayon
## Not run: 
# Example 1:
cat(chr.color("Text in red.", color = "red"))

# Example 2:
cat(chr.color("Text in blue with yellow background.", color = "blue", bg = "yellow"))

# Example 3a:
cat(chr.color("Text in boldface.", style = "bold"))

# Example 3b:
cat(chr.color("Text in boldface and italic.", style = c("bold", "italic")))

## End(Not run)
This function searches for matches to the character vector specified in pattern within each element of the character vector x.
chr.grep(pattern, x, ignore.case = FALSE, perl = FALSE, value = FALSE,
         fixed = FALSE, useBytes = FALSE, invert = FALSE, check = TRUE)

chr.grepl(pattern, x, ignore.case = FALSE, perl = FALSE, fixed = FALSE,
          useBytes = FALSE, check = TRUE)
pattern |
a character vector with character strings to be matched. |
x |
a character vector where matches are sought. |
ignore.case |
logical: if |
perl |
logical: if |
value |
logical: if |
fixed |
logical: if |
useBytes |
logical: if |
invert |
logical: if |
check |
logical: if |
Returns an integer vector with the indices of the matches when value = FALSE, a character vector containing the matching elements when value = TRUE, or a logical vector when using the chr.grepl function.
Takuya Yanagida
Becker, R. A., Chambers, J. M., & Wilks, A. R. (1988). The new S language. Wadsworth & Brooks/Cole.
chr.vector <- c("James", "Mary", "Michael", "Patricia", "Robert", "Jennifer") # Example 1: Indices of matching elements chr.grep(c("am", "er"), chr.vector) # Example 2: Values of matching elements chr.grep(c("am", "er"), chr.vector, value = TRUE) # Example 3: Matching element? chr.grepl(c("am", "er"), chr.vector)
chr.vector <- c("James", "Mary", "Michael", "Patricia", "Robert", "Jennifer") # Example 1: Indices of matching elements chr.grep(c("am", "er"), chr.vector) # Example 2: Values of matching elements chr.grep(c("am", "er"), chr.vector, value = TRUE) # Example 3: Matching element? chr.grepl(c("am", "er"), chr.vector)
This function is a multiple global string replacement wrapper that allows access to multiple methods of specifying matches and replacements.
chr.gsub(pattern, replacement, x, recycle = FALSE, ...)
pattern |
a character vector with character strings to be matched. |
replacement |
a character vector equal in length to |
x |
a character vector where matches and replacements are sought. |
recycle |
logical: if |
... |
additional arguments to pass to the |
Returns a character vector of the same length and with the same attributes as x (after possible coercion to character).
This function was adapted from the mgsub()
function in the mgsub
package by Mark Ewing (2019).
Mark Ewing
Mark Ewing (2019). mgsub: Safe, Multiple, Simultaneous String Substitution. R package version 1.7.1. https://CRAN.R-project.org/package=mgsub
chr.grep, chr.grepl, chr.omit, chr.trim
# Example 1: Replace 'the' and 'they' with 'a' and 'we'
chr.vector <- "they don't understand the value of what they seek."
chr.gsub(c("the", "they"), c("a", "we"), chr.vector)

# Example 2: Replace 'hey' and 'ho' with 'yo'
chr.vector <- c("hey ho, let's go!")
chr.gsub(c("hey", "ho"), "yo", chr.vector, recycle = TRUE)

# Example 3: Replace with regular expressions
chr.vector <- "Dopazamine is not the same as dopachloride or dopastriamine, yet is still fake."
chr.gsub(c("[Dd]opa([^ ]*?mine)", "fake"), c("Meta\\1", "real"), chr.vector)
This function omits user-specified values or strings from a numeric vector, character vector or factor.
chr.omit(x, omit = "", na.omit = FALSE, check = TRUE)
x |
a numeric vector, character vector or factor. |
omit |
a numeric vector or character vector indicating values or
strings to be omitted
from the vector |
na.omit |
logical: if |
check |
logical: if |
Returns a numeric vector, character vector or factor with values or strings
specified in omit
omitted from the vector specified in x
.
Takuya Yanagida [email protected]
chr.grep, chr.grepl, chr.gsub, chr.trim
#-------------------------------------------------------------------------------
# Character vector

x.chr <- c("a", "", "c", NA, "", "d", "e", NA)

# Example 1: Omit character string ""
chr.omit(x.chr)

# Example 2: Omit character string "" and missing values (NA)
chr.omit(x.chr, na.omit = TRUE)

# Example 3: Omit character strings "c" and "e"
chr.omit(x.chr, omit = c("c", "e"))

# Example 4: Omit character strings "c", "e", and missing values (NA)
chr.omit(x.chr, omit = c("c", "e"), na.omit = TRUE)

#-------------------------------------------------------------------------------
# Numeric vector

x.num <- c(1, 2, NA, 3, 4, 5, NA)

# Example 5: Omit values 2 and 4
chr.omit(x.num, omit = c(2, 4))

# Example 6: Omit values 2, 4, and missing values (NA)
chr.omit(x.num, omit = c(2, 4), na.omit = TRUE)

#-------------------------------------------------------------------------------
# Factor

x.factor <- factor(letters[1:10])

# Example 7: Omit factor levels "a", "c", "e", and "g"
chr.omit(x.factor, omit = c("a", "c", "e", "g"))
This function removes whitespace from the start and/or end of a string.
chr.trim(x, side = c("both", "left", "right"), check = TRUE)
x |
a character vector. |
side |
a character string indicating the side on which to remove whitespace, i.e., "both" (default), "left", or "right". |
check |
logical: if TRUE, argument specification is checked. |
Returns a character vector with whitespace removed from the vector specified in x.
This function is based on the str_trim()
function from the stringr
package by Hadley Wickham.
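For comparison, base R's trimws() provides similar functionality with a matching which argument:

# Base R equivalent for reference
trimws(" string ", which = "both")
trimws(" string ", which = "left")   # remove leading whitespace only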
Takuya Yanagida [email protected]
Wickham, H. (2019). stringr: Simple, consistent wrappers for common string operations. R package version 1.4.0.
chr.grep, chr.grepl, chr.gsub, chr.omit
x <- " string " # Example 1: Remove whitespace at both sides chr.trim(x) # Example 2: Remove whitespace at the left side chr.trim(x, side = "left") # Example 3: Remove whitespace at the right side chr.trim(x, side = "right")
x <- " string " # Example 1: Remove whitespace at both sides chr.trim(x) # Example 2: Remove whitespace at the left side chr.trim(x, side = "left") # Example 3: Remove whitespace at the right side chr.trim(x, side = "right")
The function ci.mean
computes a confidence interval for the arithmetic
mean with known or unknown population standard deviation or population variance
and the function ci.median
computes the confidence interval for the
median for one or more variables, optionally by a grouping and/or split variable.
ci.mean(..., data = NULL, sigma = NULL, sigma2 = NULL, adjust = FALSE,
        alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
        group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE,
        digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE,
        output = TRUE)

ci.median(..., data = NULL, alternative = c("two.sided", "less", "greater"),
          conf.level = 0.95, group = NULL, split = NULL, sort.var = FALSE,
          na.omit = FALSE, digits = 2, as.na = NULL, write = NULL,
          append = TRUE, check = TRUE, output = TRUE)
... |
a numeric vector, matrix or data frame with numeric variables,
i.e., factors and character variables are excluded before conducting the analysis. |
data |
a data frame when specifying one or more variables in the
argument '...'. |
sigma |
a numeric vector indicating the population standard deviation when computing confidence
intervals for the arithmetic mean with known standard deviation. Note that either the argument sigma or the argument sigma2 is specified. |
sigma2 |
a numeric vector indicating the population variance when computing confidence intervals
for the arithmetic mean with known variance. Note that either the argument sigma or the argument sigma2 is specified. |
adjust |
logical: if TRUE, difference-adjusted confidence intervals are computed. |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
group |
either a character string indicating the variable name of
the grouping variable in data, or a vector representing the grouping variable. |
split |
either a character string indicating the variable name of
the split variable in data, or a vector representing the split variable. |
sort.var |
logical: if TRUE, the output table is sorted by variables when specifying group. |
na.omit |
logical: if TRUE, incomplete cases are removed before conducting the analysis (i.e., listwise deletion) when specifying more than one outcome variable. |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
check |
logical: if TRUE, argument specification is checked. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file specified in write; if FALSE, an existing text file will be overwritten. |
output |
logical: if TRUE (default), output is shown on the console. |
A difference-adjusted confidence interval (Baguley, 2012) for the arithmetic mean can be computed by specifying adjust = TRUE.
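The adjustment rescales the usual t-based margin of error by a factor of sqrt(2)/2. The following base R sketch (an illustration of the idea, not the misty implementation) contrasts the two intervals, using mtcars$mpg as assumed example data:

# Unadjusted vs. difference-adjusted 95% CI for the arithmetic mean
x <- mtcars$mpg
n <- length(x)
moe <- qt(0.975, df = n - 1) * sd(x) / sqrt(n)  # usual margin of error
mean(x) + c(-1, 1) * moe                        # unadjusted 95% CI
mean(x) + c(-1, 1) * sqrt(2) / 2 * moe          # difference-adjusted 95% CI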
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in '...', group, and split |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Baguley, T. S. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
test.z, test.t, ci.mean.diff, ci.prop, ci.var, ci.sd, descript
# Example 1a: Two-Sided 95% Confidence Interval for the Arithmetic Mean for 'mpg'
ci.mean(mtcars$mpg)

# Example 1b: Alternative specification using the 'data' argument
ci.mean(mpg, data = mtcars)

# Example 2: Two-Sided 95% Confidence Interval for the Median
ci.median(mtcars$mpg)

# Example 3: Two-Sided 95% Difference-Adjusted Confidence Interval
ci.mean(mtcars$mpg, adjust = TRUE)

# Example 4: Two-Sided 95% Confidence Interval with known standard deviation
ci.mean(mtcars$mpg, sigma = 1.2)

# Example 5: Two-Sided 95% Confidence Interval with known variance
ci.mean(mtcars$mpg, sigma2 = 2.5)

# Example 6: One-Sided 95% Confidence Interval
ci.mean(mtcars$mpg, alternative = "less")

# Example 7: Two-Sided 99% Confidence Interval
ci.mean(mtcars$mpg, conf.level = 0.99)

# Example 8: Two-Sided 95% Confidence Interval, print results with 3 digits
ci.mean(mtcars$mpg, digits = 3)

# Example 9a: Two-Sided 95% Confidence Interval for 'mpg', 'cyl', and 'disp',
# listwise deletion for missing data
ci.mean(mtcars[, c("mpg", "cyl", "disp")], na.omit = TRUE)

# Example 9b: Alternative specification using the 'data' argument
ci.mean(mpg:disp, data = mtcars, na.omit = TRUE)

# Example 10a: Two-Sided 95% Confidence Interval, analysis by 'vs' separately
ci.mean(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs)

# Example 10b: Alternative specification using the 'data' argument
ci.mean(mpg:disp, data = mtcars, group = "vs")

# Example 11: Two-Sided 95% Confidence Interval, analysis by 'vs' separately,
# sort by variables
ci.mean(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs, sort.var = TRUE)

# Example 12: Two-Sided 95% Confidence Interval, split analysis by 'am'
ci.mean(mtcars[, c("mpg", "cyl", "disp")], split = mtcars$am)

# Example 13a: Two-Sided 95% Confidence Interval for 'mpg', 'cyl', and 'disp',
# analysis by 'vs' separately, split analysis by 'am'
ci.mean(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs, split = mtcars$am)

# Example 13b: Alternative specification using the 'data' argument
ci.mean(mpg:disp, data = mtcars, group = "vs", split = "am")

## Not run: 
# Example 14: Write results into a text file
ci.mean(mpg:disp, data = mtcars, group = "vs", split = "am", write = "Means.txt")
## End(Not run)
This function computes a confidence interval for the difference in arithmetic means in a one-sample, two-sample and paired-sample design with known or unknown population standard deviation or population variance for one or more variables, optionally by a grouping and/or split variable.
ci.mean.diff(x, ...)

## Default S3 method:
ci.mean.diff(x, y, mu = 0, sigma = NULL, sigma2 = NULL, var.equal = FALSE,
             paired = FALSE, alternative = c("two.sided", "less", "greater"),
             conf.level = 0.95, group = NULL, split = NULL, sort.var = FALSE,
             digits = 2, as.na = NULL, write = NULL, append = TRUE,
             check = TRUE, output = TRUE, ...)

## S3 method for class 'formula'
ci.mean.diff(formula, data, sigma = NULL, sigma2 = NULL, var.equal = FALSE,
             alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
             group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE,
             digits = 2, as.na = NULL, write = NULL, append = TRUE,
             check = TRUE, output = TRUE, ...)
x |
a numeric vector of data values. |
... |
further arguments to be passed to or from methods. |
y |
a numeric vector of data values. |
mu |
a numeric value indicating the population mean under the
null hypothesis. Note that the argument mu is only used in a one-sample design, i.e., when the argument y is not specified. |
sigma |
a numeric vector indicating the population standard deviation(s)
when computing confidence intervals for the difference in
arithmetic means with known standard deviation(s). In case
of independent samples, equal standard deviations are assumed
when specifying one value for the argument sigma; when specifying two values, unequal standard deviations are assumed. Note that either the argument sigma or the argument sigma2 is specified. |
sigma2 |
a numeric vector indicating the population variance(s) when
computing confidence intervals for the difference in arithmetic
means with known variance(s). In case of independent samples,
equal variances are assumed when specifying one value for the
argument sigma2; when specifying two values, unequal variances are assumed. Note that either the argument sigma or the argument sigma2 is specified. |
var.equal |
logical: if TRUE, the population variances in the two groups are assumed to be equal. |
paired |
logical: if TRUE, confidence intervals for the difference in arithmetic means in paired samples are computed. |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
group |
a numeric vector, character vector or factor as grouping variable. Note that a grouping variable can only be used when computing confidence intervals with unknown population standard deviation and population variance. |
split |
a numeric vector, character vector or factor as split variable. Note that a split variable can only be used when computing confidence intervals with unknown population standard deviation and population variance. |
sort.var |
logical: if TRUE, the output table is sorted by variables when specifying group. |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file specified in write; if FALSE, an existing text file will be overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
formula |
a formula of the form y ~ group for one outcome variable or cbind(y1, y2, y3) ~ group for more than one outcome variable, where y is a numeric variable giving the data values and group a numeric variable, character variable or factor with two values or factor levels giving the corresponding groups. |
data |
a matrix or data frame containing the variables in the formula. |
na.omit |
logical: if TRUE, incomplete cases are removed before conducting the analysis (i.e., listwise deletion) when specifying more than one outcome variable. |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in x (and y), group, and split |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
test.z, test.t, ci.mean, ci.median, ci.prop, ci.var, ci.sd, descript
dat1 <- data.frame(group1 = c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,
                              1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2),
                   group2 = c(1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 2, 2, 2,
                              1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2),
                   group3 = c(1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2,
                              1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2),
                   x1 = c(3, 1, 4, 2, 5, 3, 2, 3, 6, 4, 3, NA, 5, 3,
                          3, 2, 6, 3, 1, 4, 3, 5, 6, 7, 4, 3, 6, 4),
                   x2 = c(4, NA, 3, 6, 3, 7, 2, 7, 3, 3, 3, 1, 3, 6,
                          3, 5, 2, 6, 8, 3, 4, 5, 2, 1, 3, 1, 2, NA),
                   x3 = c(7, 8, 5, 6, 4, 2, 8, 3, 6, 1, 2, 5, 8, 6,
                          2, 5, 3, 1, 6, 4, 5, 5, 3, 6, 3, 2, 2, 4))

#-------------------------------------------------------------------------------
# One-sample design

# Example 1: Two-Sided 95% CI for x1
# population mean = 3
ci.mean.diff(dat1$x1, mu = 3)

#-------------------------------------------------------------------------------
# Two-sample design

# Example 2: Two-Sided 95% CI for x1 by group1
# unknown population variances, unequal variance assumption
ci.mean.diff(x1 ~ group1, data = dat1)

# Example 3: Two-Sided 95% CI for x1 by group1
# unknown population variances, equal variance assumption
ci.mean.diff(x1 ~ group1, data = dat1, var.equal = TRUE)

# Example 4: Two-Sided 95% CI with known standard deviations for x1 by group1
# known population standard deviations, equal standard deviation assumption
ci.mean.diff(x1 ~ group1, data = dat1, sigma = 1.2)

# Example 5: Two-Sided 95% CI with known standard deviations for x1 by group1
# known population standard deviations, unequal standard deviation assumption
ci.mean.diff(x1 ~ group1, data = dat1, sigma = c(1.5, 1.2))

# Example 6: Two-Sided 95% CI with known variance for x1 by group1
# known population variances, equal variance assumption
ci.mean.diff(x1 ~ group1, data = dat1, sigma2 = 1.44)

# Example 7: Two-Sided 95% CI with known variance for x1 by group1
# known population variances, unequal variance assumption
ci.mean.diff(x1 ~ group1, data = dat1, sigma2 = c(2.25, 1.44))

# Example 8: One-Sided 95% CI for x1 by group1
# unknown population variances, unequal variance assumption
ci.mean.diff(x1 ~ group1, data = dat1, alternative = "less")

# Example 9: Two-Sided 99% CI for x1 by group1
# unknown population variances, unequal variance assumption
ci.mean.diff(x1 ~ group1, data = dat1, conf.level = 0.99)

# Example 10: Two-Sided 95% CI for x1 by group1
# unknown population variances, unequal variance assumption
# print results with 3 digits
ci.mean.diff(x1 ~ group1, data = dat1, digits = 3)

# Example 11: Two-Sided 95% CI for x1 by group1
# unknown population variances, unequal variance assumption
# convert value 4 to NA
ci.mean.diff(x1 ~ group1, data = dat1, as.na = 4)

# Example 12: Two-Sided 95% CI for x1, x2, and x3 by group1
# unknown population variances, unequal variance assumption
ci.mean.diff(cbind(x1, x2, x3) ~ group1, data = dat1)

# Example 13: Two-Sided 95% CI for x1, x2, and x3 by group1
# unknown population variances, unequal variance assumption,
# listwise deletion for missing data
ci.mean.diff(cbind(x1, x2, x3) ~ group1, data = dat1, na.omit = TRUE)

# Example 14: Two-Sided 95% CI for x1, x2, and x3 by group1
# unknown population variances, unequal variance assumption,
# analysis by group2 separately
ci.mean.diff(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2)

# Example 15: Two-Sided 95% CI for x1, x2, and x3 by group1
# unknown population variances, unequal variance assumption,
# analysis by group2 separately, sort by variables
ci.mean.diff(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2,
             sort.var = TRUE)

# Example 16: Two-Sided 95% CI for x1, x2, and x3 by group1
# unknown population variances, unequal variance assumption,
# split analysis by group2
ci.mean.diff(cbind(x1, x2, x3) ~ group1, data = dat1, split = dat1$group2)

# Example 17: Two-Sided 95% CI for x1, x2, and x3 by group1
# unknown population variances, unequal variance assumption,
# analysis by group2 separately, split analysis by group3
ci.mean.diff(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2,
             split = dat1$group3)

#-----------------

group1 <- c(3, 1, 4, 2, 5, 3, 6, 7)
group2 <- c(5, 2, 4, 3, 1)

# Example 18: Two-Sided 95% CI for the mean difference between group1 and group2
# unknown population variances, unequal variance assumption
ci.mean.diff(group1, group2)

# Example 19: Two-Sided 95% CI for the mean difference between group1 and group2
# unknown population variances, equal variance assumption
ci.mean.diff(group1, group2, var.equal = TRUE)

#-------------------------------------------------------------------------------
# Paired-sample design

dat2 <- data.frame(pre = c(1, 3, 2, 5, 7, 6),
                   post = c(2, 2, 1, 6, 8, 9),
                   group = c(1, 1, 1, 2, 2, 2), stringsAsFactors = FALSE)

# Example 20: Two-Sided 95% CI for the mean difference in pre and post
# unknown population variance of difference scores
ci.mean.diff(dat2$pre, dat2$post, paired = TRUE)

# Example 21: Two-Sided 95% CI for the mean difference in pre and post
# unknown population variance of difference scores
# analysis by group separately
ci.mean.diff(dat2$pre, dat2$post, paired = TRUE, group = dat2$group)

# Example 22: Two-Sided 95% CI for the mean difference in pre and post
# unknown population variance of difference scores
# split analysis by group
ci.mean.diff(dat2$pre, dat2$post, paired = TRUE, split = dat2$group)

# Example 23: Two-Sided 95% CI for the mean difference in pre and post
# known population standard deviation of difference scores
ci.mean.diff(dat2$pre, dat2$post, sigma = 2, paired = TRUE)

# Example 24: Two-Sided 95% CI for the mean difference in pre and post
# known population variance of difference scores
ci.mean.diff(dat2$pre, dat2$post, sigma2 = 4, paired = TRUE)

# Example 25: One-Sided 95% CI for the mean difference in pre and post
# unknown population variance of difference scores
ci.mean.diff(dat2$pre, dat2$post, alternative = "less", paired = TRUE)

# Example 26: Two-Sided 99% CI for the mean difference in pre and post
# unknown population variance of difference scores
ci.mean.diff(dat2$pre, dat2$post, conf.level = 0.99, paired = TRUE)

# Example 27: Two-Sided 95% CI for the mean difference in pre and post
# unknown population variance of difference scores
# print results with 3 digits
ci.mean.diff(dat2$pre, dat2$post, paired = TRUE, digits = 3)

# Example 28: Two-Sided 95% CI for the mean difference in pre and post
# unknown population variance of difference scores
# convert value 1 to NA
ci.mean.diff(dat2$pre, dat2$post, as.na = 1, paired = TRUE)
This function computes difference-adjusted Cousineau-Morey within-subject confidence intervals for the arithmetic mean.
ci.mean.w(..., data = NULL, adjust = TRUE,
          alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
          na.omit = TRUE, digits = 2, as.na = NULL, write = NULL,
          append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame with numeric variables representing
the levels of the within-subject factor, i.e., data are
specified in wide-format (i.e., multivariate person level
format). Alternatively, an expression indicating the variable
names in data. |
data |
a data frame when specifying one or more variables in the
argument '...'. |
adjust |
logical: if TRUE (default), difference-adjusted Cousineau-Morey confidence intervals are computed. |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
na.omit |
logical: if TRUE (default), incomplete cases are removed before conducting the analysis (i.e., listwise deletion). |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file specified in write; if FALSE, an existing text file will be overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
The Cousineau within-subject confidence interval (CI, Cousineau, 2005) is an alternative to the Loftus-Masson within-subject CI (Loftus & Masson, 1994) that does not assume sphericity or homogeneity of covariances. This approach removes individual differences by normalizing the raw scores using participant-mean centering and adding the grand mean back to every score:
$$Y'_{ij} = Y_{ij} - \bar{Y}_i + \bar{Y}$$

where $Y_{ij}$ is the score of the $i$th participant in condition $j$ (for $i = 1$ to $n$), $\bar{Y}_i$ is the mean of participant $i$ across all $J$ levels (for $j = 1$ to $J$), and $\bar{Y}$ is the grand mean.

Morey (2008) pointed out that Cousineau's (2005) approach produces intervals that are consistently too narrow, because the normalization induces a positive covariance between normalized scores within a condition, introducing bias into the estimate of the sample variances. The degree of bias is proportional to the number of means $J$ and can be removed by rescaling the confidence interval by a factor of $\sqrt{J/(J-1)}$:

$$\hat{\mu}_j \pm t_{n-1, 1-\alpha/2} \sqrt{\frac{J}{J-1}} \hat{\sigma}_{\bar{Y}'_j}$$

where $\hat{\sigma}_{\bar{Y}'_j}$ is the standard error of the mean computed from the normalized scores of the $j$th factor level.

Baguley (2012) pointed out that the Cousineau-Morey interval is larger than the interval for a difference in means by a factor of $\sqrt{2}$, which invites the misinterpretation that overlapping 95% confidence intervals around individual means indicate that a 95% confidence interval for the difference in means would include zero. Hence, the following adjustment to the Cousineau-Morey interval was proposed:

$$\hat{\mu}_j \pm \frac{\sqrt{2}}{2} \left( t_{n-1, 1-\alpha/2} \sqrt{\frac{J}{J-1}} \hat{\sigma}_{\bar{Y}'_j} \right)$$
The adjusted Cousineau-Morey interval is informative about the pattern of
differences between means and is computed by default (i.e., adjust = TRUE
).
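The formulas above can be applied directly in base R. The following minimal sketch (not the misty implementation) assumes complete data in wide format in a data frame dat and a 95% confidence level:

# Normalization, Morey (2008) correction, and Baguley (2012) adjustment by hand
dat <- data.frame(time1 = c(3, 2, 1, 4, 5),
                  time2 = c(4, 3, 6, 5, 8),
                  time3 = c(1, 2, 2, 3, 6))
n <- nrow(dat)                                      # number of participants
J <- ncol(dat)                                      # number of conditions
norm <- dat - rowMeans(dat) + mean(as.matrix(dat))  # normalized scores
se <- apply(norm, 2, sd) / sqrt(n)                  # SE per condition
moe <- sqrt(2) / 2 * qt(0.975, df = n - 1) * sqrt(J / (J - 1)) * se
cbind(m = colMeans(dat), low = colMeans(dat) - moe, upp = colMeans(dat) + moe)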
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Baguley, T. (2012). Calculating and graphing within-subject confidence intervals for ANOVA. Behavior Research Methods, 44, 158-175. https://doi.org/10.3758/s13428-011-0123-7
Cousineau, D. (2005). Confidence intervals in within-subject designs: A simpler solution to Loftus and Masson's method. Tutorials in Quantitative Methods for Psychology, 1, 42-45. https://doi.org/10.20982/tqmp.01.1.p042

Loftus, G. R., and Masson, M. E. J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin and Review, 1, 476-490. https://doi.org/10.3758/BF03210951

Morey, R. D. (2008). Confidence intervals from normalized data: A correction to Cousineau (2005). Tutorials in Quantitative Methods for Psychology, 4(2), 61-64. https://doi.org/10.20982/tqmp.04.2.p061
aov.w, test.z, test.t, ci.mean.diff, ci.median, ci.prop, ci.var, ci.sd, descript
dat <- data.frame(time1 = c(3, 2, 1, 4, 5, 2, 3, 5, 6, 7),
                  time2 = c(4, 3, 6, 5, 8, 6, 7, 3, 4, 5),
                  time3 = c(1, 2, 2, 3, 6, 5, 1, 2, 4, 6))

# Example 1a: Difference-adjusted Cousineau-Morey confidence intervals
ci.mean.w(dat)

# Example 1b: Alternative specification using the 'data' argument
ci.mean.w(., data = dat)

# Example 2: Cousineau-Morey confidence intervals
ci.mean.w(dat, adjust = FALSE)

## Not run: 
# Example 3: Write results into a text file
ci.mean.w(dat, write = "WS_Confidence_Interval.txt")
## End(Not run)
This function computes a confidence interval for proportions for one or more variables, optionally by a grouping and/or split variable.
ci.prop(..., data = NULL, method = c("wald", "wilson"),
        alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
        group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE,
        digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE,
        output = TRUE)
... |
a numeric vector, matrix or data frame with numeric variables
with 0 and 1 values, i.e., factors and character variables
are excluded before conducting the analysis. |
data |
a data frame when specifying one or more variables in the
argument '...'. |
method |
a character string specifying the method for computing the confidence interval,
must be one of "wald" or "wilson" (default). |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
group |
either a character string indicating the variable name of
the grouping variable in data, or a vector representing the grouping variable. |
split |
either a character string indicating the variable name of
the split variable in data, or a vector representing the split variable. |
sort.var |
logical: if TRUE, the output table is sorted by variables when specifying group. |
na.omit |
logical: if TRUE, incomplete cases are removed before conducting the analysis (i.e., listwise deletion) when specifying more than one outcome variable. |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file specified in write; if FALSE, an existing text file will be overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
The Wald confidence interval, which is based on the normal approximation to the binomial distribution, is computed by specifying method = "wald", while the Wilson (1927) confidence interval (aka Wilson score interval) is requested by specifying method = "wilson". By default, the Wilson confidence interval is computed, which has been shown to be reliable in small samples of n = 40 or less as well as in larger samples of n > 40 (Brown, Cai & DasGupta, 2001), while the Wald confidence interval is inadequate in small samples and when p is near 0 or 1 (Agresti & Coull, 1998).
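Both intervals follow from standard formulas, as in the base R sketch below (an illustration, not the misty implementation), where mtcars$vs serves as an assumed 0/1 variable:

# Wald and Wilson 95% confidence intervals for a proportion, computed by hand
x <- mtcars$vs
n <- length(x)
p <- mean(x)
z <- qnorm(0.975)
p + c(-1, 1) * z * sqrt(p * (1 - p) / n)            # Wald interval
(p + z^2 / (2 * n) + c(-1, 1) * z * sqrt(p * (1 - p) / n + z^2 / (4 * n^2))) /
  (1 + z^2 / n)                                     # Wilson score interval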
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in '...', group, and split |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Agresti, A. & Coull, B.A. (1998). Approximate is better than "exact" for interval estimation of binomial proportions. American Statistician, 52, 119-126.
Brown, L. D., Cai, T. T., & DasGupta, A., (2001). Interval estimation for a binomial proportion. Statistical Science, 16, 101-133.
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
Wilson, E. B. (1927). Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22, 209-212.
ci.mean, ci.mean.diff, ci.median, ci.prop.diff, ci.var, ci.sd, descript
# Example 1a: Two-Sided 95% CI for 'vs'
ci.prop(mtcars$vs)

# Example 1b: Alternative specification using the 'data' argument
ci.prop(vs, data = mtcars)

# Example 2: Two-Sided 95% CI using Wald method
ci.prop(mtcars$vs, method = "wald")

# Example 3: One-Sided 95% CI
ci.prop(mtcars$vs, alternative = "less")

# Example 4: Two-Sided 99% CI
ci.prop(mtcars$vs, conf.level = 0.99)

# Example 5: Two-Sided 95% CI, print results with 4 digits
ci.prop(mtcars$vs, digits = 4)

# Example 6a: Two-Sided 95% CI for 'vs' and 'am',
# listwise deletion for missing data
ci.prop(mtcars[, c("vs", "am")], na.omit = TRUE)

# Example 6b: Alternative specification using the 'data' argument,
# listwise deletion for missing data
ci.prop(vs, am, data = mtcars, na.omit = TRUE)

# Example 7a: Two-Sided 95% CI, analysis by 'gear' separately
ci.prop(mtcars[, c("vs", "am")], group = mtcars$gear)

# Example 7b: Alternative specification using the 'data' argument
ci.prop(vs, am, data = mtcars, group = "gear")

# Example 8: Two-Sided 95% CI, analysis by 'gear' separately, sort by variables
ci.prop(mtcars[, c("vs", "am")], group = mtcars$gear, sort.var = TRUE)

# Example 9: Two-Sided 95% CI, split analysis by 'cyl'
ci.prop(mtcars[, c("vs", "am")], split = mtcars$cyl)

# Example 10a: Two-Sided 95% CI, analysis by 'gear' separately, split by 'cyl'
ci.prop(mtcars[, c("vs", "am")], group = mtcars$gear, split = mtcars$cyl)

# Example 10b: Alternative specification using the 'data' argument
ci.prop(vs, am, data = mtcars, group = "gear", split = "cyl")

## Not run: 
# Example 11: Write results into a text file
ci.prop(vs, am, data = mtcars, group = "gear", split = "cyl", write = "Prop.txt")
## End(Not run)
This function computes a confidence interval for the difference in proportions in a two-sample and paired-sample design for one or more variables, optionally by a grouping and/or split variable.
ci.prop.diff(x, ...)

## Default S3 method:
ci.prop.diff(x, y, method = c("wald", "newcombe"), paired = FALSE,
             alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
             group = NULL, split = NULL, sort.var = FALSE, digits = 2,
             as.na = NULL, write = NULL, append = TRUE, check = TRUE,
             output = TRUE, ...)

## S3 method for class 'formula'
ci.prop.diff(formula, data, method = c("wald", "newcombe"),
             alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
             group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE,
             digits = 2, as.na = NULL, write = NULL, append = TRUE,
             check = TRUE, output = TRUE, ...)
x |
a numeric vector with 0 and 1 values. |
... |
further arguments to be passed to or from methods. |
y |
a numeric vector with 0 and 1 values. |
method |
a character string specifying the method for computing the confidence interval,
must be one of "wald" or "newcombe" (default). |
paired |
logical: if TRUE, confidence intervals for the difference in proportions in paired samples are computed. |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
group |
a numeric vector, character vector or factor as grouping variable. Note that a grouping variable can only be used when computing confidence intervals with unknown population standard deviation and population variance. |
split |
a numeric vector, character vector or factor as split variable. Note that a split variable can only be used when computing confidence intervals with unknown population standard deviation and population variance. |
sort.var |
logical: if TRUE, the output table is sorted by variables when specifying group. |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file specified in write; if FALSE, an existing text file will be overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
formula |
a formula of the form y ~ group for one outcome variable or cbind(y1, y2, y3) ~ group for more than one outcome variable, where y is a numeric variable with 0 and 1 values and group a numeric variable, character variable or factor with two values or factor levels giving the corresponding groups. |
data |
a matrix or data frame containing the variables in the formula. |
na.omit |
logical: if TRUE, incomplete cases are removed before conducting the analysis (i.e., listwise deletion) when specifying more than one outcome variable. |
The Wald confidence interval, which is based on the normal approximation to the binomial distribution, is computed by specifying method = "wald", while the Newcombe Hybrid Score interval (Newcombe, 1998a; Newcombe, 1998b) is requested by specifying method = "newcombe". By default, the Newcombe Hybrid Score interval is computed, which has been shown to be reliable in small samples (less than n = 30 in each sample) as well as in moderate to large samples (n > 30 in each sample) and with proportions close to 0 or 1, while the Wald confidence interval does not perform well unless the sample size is large (Fagerland, Lydersen & Laake, 2011).
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in x (and y), group, and split |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Fagerland, M. W., Lydersen, S., & Laake, P. (2011). Recommended confidence intervals for two independent binomial proportions. Statistical Methods in Medical Research, 24, 224-254.
Newcombe, R. G. (1998a). Interval estimation for the difference between independent proportions: Comparison of eleven methods. Statistics in Medicine, 17, 873-890.
Newcombe, R. G. (1998b). Improved confidence intervals for the difference between binomial proportions based on paired data. Statistics in Medicine, 17, 2635-2650.
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
ci.prop, ci.mean, ci.mean.diff, ci.median, ci.var, ci.sd, descript
dat1 <- data.frame(group1 = c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,
                              1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2),
                   group2 = c(1, 1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 2, 2, 2,
                              1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2),
                   group3 = c(1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2,
                              1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 2),
                   x1 = c(0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, NA, 0, 0,
                          1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0),
                   x2 = c(0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1,
                          1, 0, 1, 0, 1, 1, 1, NA, 1, 0, 0, 1, 1, 1),
                   x3 = c(1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0,
                          1, 0, 1, 1, 0, 1, 1, 1, 0, 1, NA, 1, 0, 1))

#-------------------------------------------------------------------------------
# Two-sample design

# Example 1: Two-Sided 95% CI for x1 by group1
# Newcombe's Hybrid Score interval
ci.prop.diff(x1 ~ group1, data = dat1)

# Example 2: Two-Sided 95% CI for x1 by group1
# Wald CI
ci.prop.diff(x1 ~ group1, data = dat1, method = "wald")

# Example 3: One-Sided 95% CI for x1 by group1
# Newcombe's Hybrid Score interval
ci.prop.diff(x1 ~ group1, data = dat1, alternative = "less")

# Example 4: Two-Sided 99% CI for x1 by group1
# Newcombe's Hybrid Score interval
ci.prop.diff(x1 ~ group1, data = dat1, conf.level = 0.99)

# Example 5: Two-Sided 95% CI for x1 by group1
# Newcombe's Hybrid Score interval, print results with 3 digits
ci.prop.diff(x1 ~ group1, data = dat1, digits = 3)

# Example 6: Two-Sided 95% CI for x1 by group1
# Newcombe's Hybrid Score interval, convert value 0 to NA
ci.prop.diff(x1 ~ group1, data = dat1, as.na = 0)

# Example 7: Two-Sided 95% CI for x1, x2, and x3 by group1
# Newcombe's Hybrid Score interval
ci.prop.diff(cbind(x1, x2, x3) ~ group1, data = dat1)

# Example 8: Two-Sided 95% CI for x1, x2, and x3 by group1
# Newcombe's Hybrid Score interval, listwise deletion for missing data
ci.prop.diff(cbind(x1, x2, x3) ~ group1, data = dat1, na.omit = TRUE)

# Example 9: Two-Sided 95% CI for x1, x2, and x3 by group1
# Newcombe's Hybrid Score interval, analysis by group2 separately
ci.prop.diff(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2)

# Example 10: Two-Sided 95% CI for x1, x2, and x3 by group1
# Newcombe's Hybrid Score interval, analysis by group2 separately,
# sort by variables
ci.prop.diff(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2,
             sort.var = TRUE)

# Example 11: Two-Sided 95% CI for x1, x2, and x3 by group1
# split analysis by group2
ci.prop.diff(cbind(x1, x2, x3) ~ group1, data = dat1, split = dat1$group2)

# Example 12: Two-Sided 95% CI for x1, x2, and x3 by group1
# Newcombe's Hybrid Score interval, analysis by group2 separately,
# split analysis by group3
ci.prop.diff(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2,
             split = dat1$group3)

#-----------------

group1 <- c(0, 1, 1, 0, 0, 1, 0, 1)
group2 <- c(1, 1, 1, 0, 0)

# Example 13: Two-Sided 95% CI for the difference in proportions between
# group1 and group2, Newcombe's Hybrid Score interval
ci.prop.diff(group1, group2)

#-------------------------------------------------------------------------------
# Paired-sample design

dat2 <- data.frame(pre = c(0, 1, 1, 0, 1), post = c(1, 1, 0, 1, 1))

# Example 14: Two-Sided 95% CI for the difference in proportions of pre and post
# Newcombe's Hybrid Score interval
ci.prop.diff(dat2$pre, dat2$post, paired = TRUE)

# Example 15: Two-Sided 95% CI for the difference in proportions of pre and post
# Wald CI
ci.prop.diff(dat2$pre, dat2$post, method = "wald", paired = TRUE)

# Example 16: One-Sided 95% CI for the difference in proportions of pre and post
# Newcombe's Hybrid Score interval
ci.prop.diff(dat2$pre, dat2$post, alternative = "less", paired = TRUE)

# Example 17: Two-Sided 99% CI for the difference in proportions of pre and post
# Newcombe's Hybrid Score interval
ci.prop.diff(dat2$pre, dat2$post, conf.level = 0.99, paired = TRUE)

# Example 18: Two-Sided 95% CI for the difference in proportions of pre and post
# Newcombe's Hybrid Score interval, print results with 3 digits
ci.prop.diff(dat2$pre, dat2$post, paired = TRUE, digits = 3)
The function ci.var
computes the confidence interval for the variance,
and the function ci.sd
computes the confidence interval for the standard
deviation for one or more variables, optionally by a grouping and/or split variable.
ci.var(..., data = NULL, method = c("chisq", "bonett"),
       alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
       group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE,
       digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE,
       output = TRUE)

ci.sd(..., data = NULL, method = c("chisq", "bonett"),
      alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
      group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE,
      digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE,
      output = TRUE)
... |
a numeric vector, matrix or data frame with numeric variables,
i.e., factors and character variables are excluded before conducting the analysis. |
data |
a data frame when specifying one or more variables in the
argument '...'. |
method |
a character string specifying the method for computing the confidence interval,
must be one of "chisq" or "bonett" (default). |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
group |
either a character string indicating the variable name of
the grouping variable in data, or a vector representing the grouping variable. |
split |
either a character string indicating the variable name of
the split variable in data, or a vector representing the split variable. |
sort.var |
logical: if TRUE, the output table is sorted by variables when specifying group. |
na.omit |
logical: if TRUE, incomplete cases are removed before conducting the analysis (i.e., listwise deletion) when specifying more than one outcome variable. |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file specified in write; if FALSE, an existing text file will be overwritten. |
check |
logical: if TRUE, argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
The confidence interval based on the chi-square distribution is computed by specifying method = "chisq", while the Bonett (2006) confidence interval is requested by specifying method = "bonett". By default, the Bonett confidence interval is computed, which performs well under moderate departure from normality, while the confidence interval based on the chi-square distribution is highly sensitive to minor violations of the normality assumption and its performance does not improve with increasing sample size. Note that at least four valid observations are needed to compute the Bonett confidence interval.
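For reference, the chi-square-based interval follows directly from the sampling distribution of (n - 1)s^2 / sigma^2. A minimal base R sketch (not the misty implementation) using mtcars$mpg as assumed data; the Bonett interval additionally involves a kurtosis correction and is not reproduced here:

# Chi-square-based 95% confidence interval for the variance, computed by hand
x <- mtcars$mpg
n <- length(x)
(n - 1) * var(x) / qchisq(c(0.975, 0.025), df = n - 1)  # lower and upper limit
# Square roots of these limits give a confidence interval for the SD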
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in '...', group, and split |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
Bonett, D. G. (2006). Approximate confidence interval for standard deviation of nonnormal distributions. Computational Statistics and Data Analysis, 50, 775-782. https://doi.org/10.1016/j.csda.2004.10.003
ci.mean, ci.mean.diff, ci.median, ci.prop, ci.prop.diff, descript
# Example 1a: Two-Sided 95% CI for the variance for 'mpg'
ci.var(mtcars$mpg)

# Example 1b: Alternative specification using the 'data' argument
ci.var(mpg, data = mtcars)

# Example 2a: Two-Sided 95% CI for the standard deviation for 'mpg'
ci.sd(mtcars$mpg)

# Example 2b: Alternative specification using the 'data' argument
ci.sd(mpg, data = mtcars)

# Example 3: Two-Sided 95% CI using the chi-square distribution
ci.var(mtcars$mpg, method = "chisq")

# Example 4: One-Sided 95% CI
ci.var(mtcars$mpg, alternative = "less")

# Example 5: Two-Sided 99% CI
ci.var(mtcars$mpg, conf.level = 0.99)

# Example 6: Two-Sided 95% CI, print results with 3 digits
ci.var(mtcars$mpg, digits = 3)

# Example 7a: Two-Sided 95% CI for 'mpg', 'disp', and 'hp'
ci.var(mtcars[, c("mpg", "disp", "hp")])

# Example 7b: Alternative specification using the 'data' argument
ci.var(mpg:hp, data = mtcars)

# Example 8a: Two-Sided 95% CI, analysis by 'vs' separately
ci.var(mtcars[, c("mpg", "disp", "hp")], group = mtcars$vs)

# Example 8b: Alternative specification using the 'data' argument
ci.var(mpg:hp, data = mtcars, group = "vs")

# Example 9: Two-Sided 95% CI, analysis by 'vs' separately, sort by variables
ci.var(mtcars[, c("mpg", "disp", "hp")], group = mtcars$vs, sort.var = TRUE)

# Example 10: Two-Sided 95% CI, split analysis by 'vs'
ci.var(mtcars[, c("mpg", "disp", "hp")], split = mtcars$vs)

# Example 11a: Two-Sided 95% CI, analysis by 'vs' separately, split analysis by 'am'
ci.var(mtcars[, c("mpg", "disp", "hp")], group = mtcars$vs, split = mtcars$am)

# Example 11b: Alternative specification using the 'data' argument
ci.var(mpg:hp, data = mtcars, group = "vs", split = "am")

## Not run: 
# Example 12: Write results into a text file
ci.var(mpg:hp, data = mtcars, group = "vs", split = "am", write = "Variance.txt")
## End(Not run)
This function clears the console, equivalent to pressing Ctrl + L in RStudio, on Windows, macOS, UNIX, or Linux operating systems.
clear()
Takuya Yanagida
## Not run:
# Clear console
clear()
## End(Not run)
This function computes cluster scores for one or more variables, i.e., group means by default; alternatively, group sums, medians, variances, standard deviations, minima, or maxima can be computed via the fun argument.
cluster.scores(..., data = NULL, cluster, fun = c("mean", "sum", "median", "var", "sd", "min", "max"), expand = TRUE, append = TRUE, name = ".a", as.na = NULL, check = TRUE)
... |
a numeric vector for computing cluster scores for a variable,
matrix or data frame for computing cluster scores for more than
one variable. Alternatively, an expression indicating the variable
names in |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
either a character string indicating the variable name of
the cluster variable in |
fun |
character string indicating the function used to compute group
scores, default: |
expand |
logical: if |
append |
logical: if |
name |
a character string or character vector indicating the names
of the computed variables. By default, variables are named with the ending
|
as.na |
a numeric vector indicating user-defined missing values, i.e.
these values are converted to |
check |
logical: if |
Returns a numeric vector or data frame containing cluster scores with the same length or the same number of rows as x if expand = TRUE, or with the length or number of rows equal to length(unique(cluster)) if expand = FALSE.
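For intuition, the two return shapes correspond to base R's ave() and tapply(); a minimal sketch with illustrative toy data:

# Toy data: six observations nested in three clusters
x <- c(2, 4, 3, 5, 1, 7)
cluster <- c(1, 1, 2, 2, 3, 3)

# expand = TRUE: one cluster mean per observation (cf. ave)
ave(x, cluster, FUN = mean)
# [1] 3 3 4 4 4 4

# expand = FALSE: one cluster mean per cluster (cf. tapply)
tapply(x, cluster, mean)
# 1 2 3
# 3 4 4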
Takuya Yanagida [email protected]
Hox, J., Moerbeek, M., & van de Schoot, R. (2018). Multilevel analysis: Techniques and applications (3rd. ed.). Routledge.
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). Sage Publishers.
item.scores, multilevel.descript, multilevel.icc
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Example 1a: Compute cluster means for 'y1' and expand to match the input 'y1'
cluster.scores(Demo.twolevel$y1, cluster = Demo.twolevel$cluster)

# Example 1b: Alternative specification using the 'data' argument
cluster.scores(y1, data = Demo.twolevel, cluster = "cluster")

# Example 2: Compute standard deviation for each cluster
# and expand to match the input x
cluster.scores(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, fun = "sd")

# Example 3: Compute cluster means without expanding the vector
cluster.scores(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, expand = FALSE)

# Example 4a: Compute cluster means for 'y1' and 'y2' and append to 'Demo.twolevel'
cbind(Demo.twolevel,
      cluster.scores(Demo.twolevel[, c("y1", "y2")], cluster = Demo.twolevel$cluster))

# Example 4b: Alternative specification using the 'data' argument
cluster.scores(y1, y2, data = Demo.twolevel, cluster = "cluster")
This function creates k - 1 coded variables for a categorical variable with k distinct levels. The coding systems available in this function are dummy coding, simple coding, unweighted effect coding, weighted effect coding, repeated coding, forward Helmert coding, reverse Helmert coding, and orthogonal polynomial coding.
coding(..., data = NULL, type = c("dummy", "simple", "effect", "weffect", "repeat", "fhelm", "rhelm", "poly"), base = NULL, name = c("dum.", "sim.", "eff.", "weff.", "rep.", "fhelm.", "rhelm.", "poly."), append = TRUE, as.na = NULL, check = TRUE)
... |
a numeric vector with integer values, character vector or factor
Alternatively, an expression indicating the variable name in
|
data |
a data frame when specifying a variable in the argument |
type |
a character string indicating the type of coding, i.e.,
|
base |
a numeric value or character string indicating the baseline group for dummy and simple coding and the omitted group in effect coding. By default, the first group or factor level is selected as baseline or omitted group. |
name |
a character string or character vector indicating the names
of the coded variables. By default, variables are named
|
append |
logical: if |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
Dummy or treatment coding compares the mean of each level of the categorical variable to the mean of a baseline group. By default, the first group or factor level is selected as the baseline group. The intercept in the regression model represents the mean of the baseline group. For example, dummy coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: B vs A, C vs A, and D vs A, with A being the baseline group.
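To see the comparisons concretely, the dummy coding matrix for a four-level factor can be inspected with contr.treatment() from the stats package, which the note below names as the basis of this coding type; a minimal sketch:

# Dummy coding matrix with level A as baseline;
# columns correspond to B vs A, C vs A, and D vs A
contr.treatment(c("A", "B", "C", "D"))
#   B C D
# A 0 0 0
# B 1 0 0
# C 0 1 0
# D 0 0 1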
Simple coding compares each level of the categorical variable to the mean of a baseline level. By default, the first group or factor level is selected as the baseline group. The intercept in the regression model represents the unweighted grand mean, i.e., the mean of the group means. For example, simple coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: B vs A, C vs A, and D vs A, with A being the baseline group.
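Simple coding is commonly constructed by subtracting 1/k from the dummy coding matrix, which leaves the level-vs-baseline comparisons intact while moving the intercept to the unweighted grand mean; a minimal base R sketch (not necessarily the exact internal computation):

# Simple coding matrix for a four-level factor (k = 4)
contr.treatment(4) - 1/4
#       2     3     4
# 1 -0.25 -0.25 -0.25
# 2  0.75 -0.25 -0.25
# 3 -0.25  0.75 -0.25
# 4 -0.25 -0.25  0.75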
Unweighted effect or sum coding compares the mean of a given level to the unweighted grand mean, i.e., the mean of the group means. By default, the first group or factor level is selected as the omitted group. For example, effect coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: B vs (A, B, C, D), C vs (A, B, C, D), and D vs (A, B, C, D), with A being the omitted group.
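Base R's contr.sum() produces this kind of contrast matrix, although it omits the last rather than the first level by default; a minimal sketch:

# Unweighted effect (sum) coding for a four-level factor;
# the row of -1s marks the omitted level (here the last one)
contr.sum(4)
#   [,1] [,2] [,3]
# 1    1    0    0
# 2    0    1    0
# 3    0    0    1
# 4   -1   -1   -1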
Weighted effect or sum coding compares the mean of a given level to the weighted grand mean, i.e., the sample mean. By default, the first group or factor level is selected as the omitted group. For example, weighted effect coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: B vs (A, B, C, D), C vs (A, B, C, D), and D vs (A, B, C, D), with A being the omitted group.
Repeated or difference coding compares the mean of each level of the categorical variable to the mean of the previous adjacent level. For example, repeated coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: B vs A, C vs B, and D vs C.
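The note below states that repeated coding is based on a modified copy of contr.sdif() from the MASS package; the successive-difference contrast matrix can be inspected directly:

# Repeated (successive difference) coding for a four-level factor;
# columns correspond to B vs A (2-1), C vs B (3-2), and D vs C (4-3)
MASS::contr.sdif(4)
#     2-1  3-2   4-3
# 1 -0.75 -0.5 -0.25
# 2  0.25 -0.5 -0.25
# 3  0.25  0.5 -0.25
# 4  0.25  0.5  0.75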
Forward Helmert coding compares the mean of each level of the categorical variable to the unweighted mean of all subsequent level(s) of the categorical variable. For example, forward Helmert coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: (B, C, D) vs A, (C, D) vs B, and D vs C.
Reverse Helmert coding compares the mean of each level of the categorical variable to the unweighted mean of all prior level(s) of the categorical variable. For example, reverse Helmert coding based on a categorical variable with four groups A, B, C, D makes the following comparisons: B vs A, C vs (A, B), and D vs (A, B, C).
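Base R's contr.helmert() implements this reverse Helmert pattern, up to a scaling of the resulting regression coefficients; a minimal sketch:

# Reverse Helmert coding for a four-level factor;
# columns correspond to B vs A, C vs (A, B), and D vs (A, B, C)
contr.helmert(4)
#   [,1] [,2] [,3]
# 1   -1   -1   -1
# 2    1   -1   -1
# 3    0    2   -1
# 4    0    0    3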
Orthogonal polynomial coding is a form of trend analysis based on polynomials of order k - 1, where k is the number of levels of the categorical variable. This coding scheme assumes an ordered-categorical variable with equally spaced levels. For example, orthogonal polynomial coding based on a categorical variable with four groups A, B, C, D investigates linear, quadratic, and cubic trends in the categorical variable.
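The note below states that orthogonal polynomial coding uses the contr.poly() function from the stats package; the resulting contrast matrix contains the linear, quadratic, and cubic trend weights:

# Orthogonal polynomial coding for a four-level ordered factor
round(contr.poly(4), digits = 2)
#         .L   .Q    .C
# [1,] -0.67  0.5 -0.22
# [2,] -0.22 -0.5  0.67
# [3,]  0.22 -0.5 -0.67
# [4,]  0.67  0.5  0.22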
Returns a data frame with the coded variables, or a data frame with the same length or the same number of rows as the input in ... containing the coded variables.
This function uses the contr.treatment function from the stats package for dummy coding and simple coding, a modified copy of the contr.sum function from the stats package for effect coding, a modified copy of the contr.wec function from the wec package for weighted effect coding, a modified copy of the contr.sdif function from the MASS package for repeated coding, a modified copy of the code_helmert_forward function from the codingMatrices package for forward Helmert coding, a modified copy of the contr_code_helmert function from the faux package for reverse Helmert coding, and the contr.poly function from the stats package for orthogonal polynomial coding.
Takuya Yanagida [email protected]
# Example 1a: Dummy coding for 'gear', baseline group = 3
coding(gear, data = mtcars)

# Example 1b: Alternative specification without using the 'data' argument
coding(mtcars$gear)

# Example 2: Dummy coding for 'gear', baseline group = 4
coding(gear, data = mtcars, base = 4)

# Example 3a: Effect coding for 'gear', omitted group = 3
coding(gear, data = mtcars, type = "effect")

# Example 3b: Effect coding for 'gear', omitted group = 4
coding(gear, data = mtcars, type = "effect", base = 4)

# Example 4a: Dummy-coded variable names with prefix "gear3."
coding(gear, data = mtcars, name = "gear3.")

# Example 4b: Dummy-coded variables named "gear_4vs3" and "gear_5vs3"
coding(gear, data = mtcars, name = c("gear_4vs3", "gear_5vs3"))
This function computes Cohen's d for one-sample, two-sample (i.e., between-subject design), and paired-sample designs (i.e., within-subject design) for one or more variables, optionally by a grouping and/or split variable. In a two-sample design, the function computes the standardized mean difference by dividing the difference between the means of the two groups of observations by the weighted pooled standard deviation (i.e., Cohen's d.s according to Lakens, 2013) by default. In a paired-sample design, the function computes the standardized mean difference by dividing the mean of the difference scores by the standard deviation of the difference scores (i.e., Cohen's d.z according to Lakens, 2013) by default. Note that by default Cohen's d is computed without applying the correction factor for removing the small sample bias (i.e., Hedges' g).
cohens.d(x, ...) ## Default S3 method: cohens.d(x, y = NULL, mu = 0, paired = FALSE, weighted = TRUE, cor = TRUE, ref = NULL, correct = FALSE, alternative = c("two.sided", "less", "greater"), conf.level = 0.95, group = NULL, split = NULL, sort.var = FALSE, digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE, ...) ## S3 method for class 'formula' cohens.d(formula, data, weighted = TRUE, cor = TRUE, ref = NULL, correct = FALSE, alternative = c("two.sided", "less", "greater"), conf.level = 0.95, group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE, digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE, ...)
x |
a numeric vector or data frame. |
... |
further arguments to be passed to or from methods. |
y |
a numeric vector. |
mu |
a numeric value indicating the reference mean. |
paired |
logical: if |
weighted |
logical: if |
cor |
logical: if |
ref |
character string |
correct |
logical: if |
alternative |
a character string specifying the alternative hypothesis, must be one of
|
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
group |
a numeric vector, character vector or factor as grouping variable. |
split |
a numeric vector, character vector or factor as split variable. |
sort.var |
logical: if |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
formula |
a formula of the form |
data |
a matrix or data frame containing the variables in the formula |
na.omit |
logical: if |
Cohen (1988, p. 67) proposed to compute the standardized mean difference in a two-sample design by dividing the mean difference by the unweighted pooled standard deviation (i.e., weighted = FALSE). Glass et al. (1981, p. 29) suggested using the standard deviation of the control group (e.g., ref = 0 if the control group is coded with 0) to compute the standardized mean difference in a two-sample design (i.e., Glass's Δ), since the standard deviation of the control group is unaffected by the treatment and will therefore more closely reflect the population standard deviation.
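To make the three standardizers concrete, the following base R sketch computes the two-sample standardized mean difference using the weighted pooled SD, the unweighted pooled SD, and the SD of the control group (Glass's Δ); the toy vectors are purely illustrative:

x <- c(3, 2, 5, 3, 6, 3, 2)   # treatment group
y <- c(4, 6, 5, 3, 3, 4, 2)   # control group
n1 <- length(x); n2 <- length(y)
md <- mean(x) - mean(y)

# Weighted pooled SD (weighted = TRUE, the default)
md / sqrt(((n1 - 1)*var(x) + (n2 - 1)*var(y)) / (n1 + n2 - 2))

# Unweighted pooled SD (Cohen, 1988; weighted = FALSE)
md / sqrt((var(x) + var(y)) / 2)

# SD of the control group (Glass's delta; ref = control group)
md / sd(y)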
Hedges (1981, p. 110) recommended weighting each group's standard deviation by its sample size, resulting in a weighted pooled standard deviation (i.e., weighted = TRUE, default). According to Hedges and Olkin (1985, p. 81), the standardized mean difference based on the weighted pooled standard deviation has a positive small sample bias, i.e., the standardized mean difference is overestimated in small samples (i.e., sample size less than 20 or less than 10 in each group). However, a correction factor can be applied to remove the small sample bias (i.e., correct = TRUE). Note that the function uses a gamma function for computing the correction factor, while an approximation method is used if the computation based on the gamma function fails.
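A sketch of the exact gamma-based correction factor and the common approximation (cf. Hedges, 1981); that misty uses exactly these formulas is an assumption based on the description above:

# Exact bias-correction factor based on the gamma function,
# e.g., df = n1 + n2 - 2 in a two-sample design
correct.exact <- function(df) gamma(df/2) / (sqrt(df/2) * gamma((df - 1)/2))

# Approximation used when the gamma-based computation fails
correct.approx <- function(df) 1 - 3 / (4 * df - 1)

correct.exact(18)    # ~0.9576
correct.approx(18)   # ~0.9577
# Hedges' g = correction factor * Cohen's d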
Note that the terminology is inconsistent because the standardized mean difference based on the weighted pooled standard deviation is usually called Cohen's d, but is sometimes called Hedges' g. Oftentimes, Cohen's d is called Hedges' d as soon as the small sample correction factor is applied. Cumming and Calin-Jageman (2017, p. 171) recommended avoiding the term Hedges' g, but reporting which standard deviation was used to standardize the mean difference (e.g., unweighted/weighted pooled standard deviation, or the standard deviation of the control group) and whether a small sample correction factor was applied.
As for the terminology according to Lakens (2013), in a two-sample design (i.e., paired = FALSE), Cohen's d.s is computed when using weighted = TRUE (default), and Hedges' g.s is computed when using correct = TRUE in addition. In a paired-sample design (i.e., paired = TRUE), Cohen's d.z is computed when using weighted = TRUE (default), while Cohen's d.rm is computed when using weighted = FALSE and cor = TRUE (default), and Cohen's d.av is computed when using weighted = FALSE and cor = FALSE. The corresponding Hedges' g.z, g.rm, and g.av are computed when using correct = TRUE in addition.
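Following the formulas in Lakens (2013), the paired-design variants differ only in the standardizer; a minimal base R sketch with illustrative data:

x <- c(3, 2, 5, 3, 6, 3, 2)   # measurement 1
y <- c(4, 6, 5, 3, 3, 4, 2)   # measurement 2
md <- mean(x - y)
r  <- cor(x, y)

# d.z: SD of the difference scores (weighted = TRUE, default)
md / sd(x - y)

# d.rm: controls for the correlation between measures
# (weighted = FALSE, cor = TRUE)
md / sqrt(var(x) + var(y) - 2 * r * sd(x) * sd(y)) * sqrt(2 * (1 - r))

# d.av: average SD, ignoring the correlation
# (weighted = FALSE, cor = FALSE)
md / ((sd(x) + sd(y)) / 2)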
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
sample |
type of sample, i.e., one-, two-, or, paired-sample |
data |
list with the input specified in |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Academic Press.
Cumming, G., & Calin-Jageman, R. (2017). Introduction to the new statistics: Estimation, open science, & beyond. Routledge.
Glass. G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social research. Sage Publication.
Goulet-Pelletier, J.-C., & Cousineau, D. (2018). A review of effect sizes and their confidence intervals, Part I: The Cohen's d family. The Quantitative Methods for Psychology, 14, 242-265. https://doi.org/10.20982/tqmp.14.4.p242
Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(3), 106-128.
Hedges, L. V. & Olkin, I. (1985). Statistical methods for meta-analysis. Academic Press.
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 1-12. https://doi.org/10.3389/fpsyg.2013.00863
test.t, test.z, effsize, cor.matrix, na.auxiliary
dat1 <- data.frame(group1 = c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2,
                              1, 2, 2, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1),
                   group2 = c(1, 2, 1, 1, 1, 2, 1, 2, 1, 2, 1, 2, 2, 2,
                              1, 2, 1, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2),
                   group3 = c(1, 2, 1, 2, 1, 2, 2, 2, 1, 2, 2, 1, 1, 1,
                              1, 2, 2, 2, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1),
                   x1 = c(3, 2, 5, 3, 6, 3, 2, 4, 6, 5, 3, 3, 5, 4,
                          4, 3, 5, 3, 2, 3, 3, 6, 6, 7, 5, 6, 6, 4),
                   x2 = c(4, 4, 3, 6, 4, 7, 3, 5, 3, 3, 4, 2, 3, 6,
                          3, 5, 2, 6, 8, 3, 2, 5, 4, 5, 3, 2, 2, 4),
                   x3 = c(7, 6, 5, 6, 4, 2, 8, 3, 6, 1, 2, 5, 8, 6,
                          2, 5, 3, 1, 6, 4, 5, 5, 3, 6, 3, 2, 2, 4))

#-------------------------------------------------------------------------------
# One-sample design

# Example 1: Cohen's d.z with two-sided 95% CI
# population mean = 3
cohens.d(dat1$x1, mu = 3)

# Example 2: Cohen's d.z (aka Hedges' g.z) with two-sided 95% CI
# population mean = 3, with small sample correction factor
cohens.d(dat1$x1, mu = 3, correct = TRUE)

# Example 3: Cohen's d.z for more than one variable with two-sided 95% CI
# population mean = 3
cohens.d(dat1[, c("x1", "x2", "x3")], mu = 3)

# Example 4: Cohen's d.z with two-sided 95% CI
# population mean = 3, by group1 separately
cohens.d(dat1$x1, mu = 3, group = dat1$group1)

# Example 5: Cohen's d.z for more than one variable with two-sided 95% CI
# population mean = 3, by group1 separately
cohens.d(dat1[, c("x1", "x2", "x3")], mu = 3, group = dat1$group1)

# Example 6: Cohen's d.z with two-sided 95% CI
# population mean = 3, split analysis by group1
cohens.d(dat1$x1, mu = 3, split = dat1$group1)

# Example 7: Cohen's d.z for more than one variable with two-sided 95% CI
# population mean = 3, split analysis by group1
cohens.d(dat1[, c("x1", "x2", "x3")], mu = 3, split = dat1$group1)

# Example 8: Cohen's d.z with two-sided 95% CI
# population mean = 3, by group1 separately, split analysis by group2
cohens.d(dat1$x1, mu = 3, group = dat1$group1, split = dat1$group2)

# Example 9: Cohen's d.z for more than one variable with two-sided 95% CI
# population mean = 3, by group1 separately, split analysis by group2
cohens.d(dat1[, c("x1", "x2", "x3")], mu = 3, group = dat1$group1,
         split = dat1$group2)

#-------------------------------------------------------------------------------
# Two-sample design

# Example 10: Cohen's d.s with two-sided 95% CI
# weighted pooled SD
cohens.d(x1 ~ group1, data = dat1)

# Example 11: Cohen's d.s with two-sided 99% CI
# weighted pooled SD
cohens.d(x1 ~ group1, data = dat1, conf.level = 0.99)

# Example 12: Cohen's d.s with one-sided 95% CI
# weighted pooled SD
cohens.d(x1 ~ group1, data = dat1, alternative = "greater")

# Example 13: Cohen's d.s with two-sided 99% CI
# weighted pooled SD
cohens.d(x1 ~ group1, data = dat1, conf.level = 0.99)

# Example 14: Cohen's d.s with one-sided 95% CI
# weighted pooled SD
cohens.d(x1 ~ group1, data = dat1, alternative = "greater")

# Example 15: Cohen's d.s for more than one variable with two-sided 95% CI
# weighted pooled SD
cohens.d(cbind(x1, x2, x3) ~ group1, data = dat1)

# Example 16: Cohen's d with two-sided 95% CI
# unweighted SD
cohens.d(x1 ~ group1, data = dat1, weighted = FALSE)

# Example 17: Cohen's d.s (aka Hedges' g.s) with two-sided 95% CI
# weighted pooled SD, with small sample correction factor
cohens.d(x1 ~ group1, data = dat1, correct = TRUE)

# Example 18: Cohen's d (aka Hedges' g) with two-sided 95% CI
# unweighted SD, with small sample correction factor
cohens.d(x1 ~ group1, data = dat1, weighted = FALSE, correct = TRUE)

# Example 19: Cohen's d (aka Glass's delta) with two-sided 95% CI
# SD of reference group 1
cohens.d(x1 ~ group1, data = dat1, ref = 1)

# Example 20: Cohen's d.s with two-sided 95% CI
# weighted pooled SD, by group2 separately
cohens.d(x1 ~ group1, data = dat1, group = dat1$group2)

# Example 21: Cohen's d.s for more than one variable with two-sided 95% CI
# weighted pooled SD, by group2 separately
cohens.d(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2)

# Example 22: Cohen's d.s with two-sided 95% CI
# weighted pooled SD, split analysis by group2
cohens.d(x1 ~ group1, data = dat1, split = dat1$group2)

# Example 23: Cohen's d.s for more than one variable with two-sided 95% CI
# weighted pooled SD, split analysis by group2
cohens.d(cbind(x1, x2, x3) ~ group1, data = dat1, split = dat1$group2)

# Example 24: Cohen's d.s with two-sided 95% CI
# weighted pooled SD, by group2 separately, split analysis by group3
cohens.d(x1 ~ group1, data = dat1, group = dat1$group2, split = dat1$group3)

# Example 25: Cohen's d.s for more than one variable with two-sided 95% CI
# weighted pooled SD, by group2 separately, split analysis by group3
cohens.d(cbind(x1, x2, x3) ~ group1, data = dat1, group = dat1$group2,
         split = dat1$group3)

#-------------------------------------------------------------------------------
# Paired-sample design

# Example 26: Cohen's d.z with two-sided 95% CI
# SD of the difference scores
cohens.d(dat1$x1, dat1$x2, paired = TRUE)

# Example 27: Cohen's d.z with two-sided 99% CI
# SD of the difference scores
cohens.d(dat1$x1, dat1$x2, paired = TRUE, conf.level = 0.99)

# Example 28: Cohen's d.z with one-sided 95% CI
# SD of the difference scores
cohens.d(dat1$x1, dat1$x2, paired = TRUE, alternative = "greater")

# Example 29: Cohen's d.rm with two-sided 95% CI
# controlling for the correlation between measures
cohens.d(dat1$x1, dat1$x2, paired = TRUE, weighted = FALSE)

# Example 30: Cohen's d.av with two-sided 95% CI
# without controlling for the correlation between measures
cohens.d(dat1$x1, dat1$x2, paired = TRUE, weighted = FALSE, cor = FALSE)

# Example 31: Cohen's d.z (aka Hedges' g.z) with two-sided 95% CI
# SD of the difference scores
cohens.d(dat1$x1, dat1$x2, paired = TRUE, correct = TRUE)

# Example 32: Cohen's d.rm (aka Hedges' g.rm) with two-sided 95% CI
# controlling for the correlation between measures
cohens.d(dat1$x1, dat1$x2, paired = TRUE, weighted = FALSE, correct = TRUE)

# Example 33: Cohen's d.av (aka Hedges' g.av) with two-sided 95% CI
# without controlling for the correlation between measures
cohens.d(dat1$x1, dat1$x2, paired = TRUE, weighted = FALSE, cor = FALSE,
         correct = TRUE)

# Example 34: Cohen's d.z with two-sided 95% CI
# SD of the difference scores, by group1 separately
cohens.d(dat1$x1, dat1$x2, paired = TRUE, group = dat1$group1)

# Example 35: Cohen's d.z with two-sided 95% CI
# SD of the difference scores, split analysis by group1
cohens.d(dat1$x1, dat1$x2, paired = TRUE, split = dat1$group1)

# Example 36: Cohen's d.z with two-sided 95% CI
# SD of the difference scores, by group1 separately, split analysis by group2
cohens.d(dat1$x1, dat1$x2, paired = TRUE, group = dat1$group1,
         split = dat1$group2)
This function computes a correlation matrix based on the Pearson product-moment correlation coefficient, Spearman's rank-order correlation coefficient, Kendall's Tau-b correlation coefficient, Kendall-Stuart's Tau-c correlation coefficient, the tetrachoric correlation coefficient, or the polychoric correlation coefficient, and computes significance values (p-values) for testing the hypothesis H0: ρ = 0 for all pairs of variables.
cor.matrix(..., data = NULL, method = c("pearson", "spearman", "kendall-b", "kendall-c", "tetra", "poly"), na.omit = FALSE, group = NULL, sig = FALSE, alpha = 0.05, print = c("all", "cor", "n", "stat", "df", "p"), tri = c("both", "lower", "upper"), p.adj = c("none", "bonferroni", "holm", "hochberg", "hommel", "BH", "BY", "fdr"), continuity = TRUE, digits = 2, p.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. Alternatively, an expression indicating
the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
method |
a character vector indicating which correlation coefficient
is to be computed, i.e. |
na.omit |
logical: if |
group |
either a character string indicating the variable name of
the grouping variable in |
sig |
logical: if |
alpha |
a numeric value between 0 and 1 indicating the significance
level at which correlation coefficients are printed boldface
when |
print |
a character string or character vector indicating which results
to show on the console, i.e. |
tri |
a character string indicating which triangular of the matrix
to show on the console, i.e., |
p.adj |
a character string indicating an adjustment method for multiple
testing based on |
continuity |
logical: if |
digits |
an integer value indicating the number of decimal places to be used for displaying correlation coefficients. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying p-values. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Note that unlike the cor.test function, this function does not compute an exact p-value for Spearman's rank-order correlation coefficient or Kendall's Tau-b correlation coefficient, but uses the asymptotic t approximation.

Statistically significant correlation coefficients can be shown in boldface on the console when specifying sig = TRUE. However, this option is not supported when using R Markdown, i.e., the argument sig will switch to FALSE.
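The asymptotic t approximation mentioned above tests a correlation r based on n observations with df = n - 2; a minimal sketch (for the Pearson coefficient this matches cor.test()):

r <- cor(mtcars$mpg, mtcars$disp)
n <- nrow(mtcars)

t.stat <- r * sqrt((n - 2) / (1 - r^2))
p <- 2 * pt(-abs(t.stat), df = n - 2)

c(t = t.stat, df = n - 2, p = p)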
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
result |
list with result tables, i.e., |
This function uses the polychoric() function in the psych package by William Revelle to estimate tetrachoric and polychoric correlation coefficients.
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
Revelle, W. (2018). psych: Procedures for personality and psychological research (Version 1.8.12). Northwestern University, Evanston, Illinois, USA. https://CRAN.R-project.org/package=psych
write.result, cohens.d, effsize, multilevel.icc, na.auxiliary, size.cor.
# Example 1a: Pearson product-moment correlation coefficient between 'Ozone' and 'Solar.R'
cor.matrix(airquality[, c("Ozone", "Solar.R")])

# Example 1b: Alternative specification using the 'data' argument
cor.matrix(Ozone, Solar.R, data = airquality)

# Example 2a: Pearson product-moment correlation matrix using pairwise deletion
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")])

# Example 2b: Alternative specification using the 'data' argument
cor.matrix(Ozone:Wind, data = airquality)

# Example 3: Spearman's rank-order correlation matrix
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], method = "spearman")

# Example 4: Pearson product-moment correlation matrix
# highlight statistically significant result at alpha = 0.05
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], sig = TRUE)

# Example 5: Pearson product-moment correlation matrix
# highlight statistically significant result at alpha = 0.10
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], sig = TRUE, alpha = 0.10)

# Example 6: Pearson product-moment correlation matrix
# print sample size and significance values
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], print = "all")

# Example 7: Pearson product-moment correlation matrix using listwise deletion,
# print sample size and significance values
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], na.omit = TRUE, print = "all")

# Example 8: Pearson product-moment correlation matrix
# print sample size and significance values with Bonferroni correction
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], na.omit = TRUE,
           print = "all", p.adj = "bonferroni")

# Example 9a: Pearson product-moment correlation matrix for 'mpg', 'cyl', and 'disp'
# results for group "0" and "1" separately
cor.matrix(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs)

# Example 9b: Alternative specification using the 'data' argument
cor.matrix(mpg:disp, data = mtcars, group = "vs")

## Not run:
# Example 10a: Write results into a text file
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], print = "all",
           write = "Correlation.txt")

# Example 10b: Write results into an Excel file
cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], print = "all",
           write = "Correlation.xlsx")

result <- cor.matrix(airquality[, c("Ozone", "Solar.R", "Wind")], print = "all",
                     output = FALSE)
write.result(result, "Correlation.xlsx")
## End(Not run)
This function creates two-way and three-way cross tabulations with absolute frequencies and row-wise, column-wise, and total percentages.
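For orientation, the printed percentages correspond to base R's prop.table() applied to a frequency table; a minimal sketch:

# Two-way frequency table for 'vs' and 'am'
tab <- table(mtcars$vs, mtcars$am)

tab                                  # absolute frequencies
prop.table(tab, margin = 1) * 100    # row-wise percentages
prop.table(tab, margin = 2) * 100    # column-wise percentages
prop.table(tab) * 100                # total percentages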
crosstab(..., data = NULL, print = c("no", "all", "row", "col", "total"), freq = TRUE, split = FALSE, na.omit = TRUE, digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame with two or three columns. Alternatively,
an expression indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
print |
a character string or character vector indicating which
percentage(s) to be printed on the console, i.e., no percentages
( |
freq |
logical: if |
split |
logical: if |
na.omit |
logical: if |
digits |
an integer indicating the number of decimal places digits to be used for displaying percentages. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
write.result, freq, descript, multilevel.descript, na.descript.
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
#----------------------------------------------------------------------------
# Two-Dimensional Table

# Example 1a: Cross Tabulation for 'vs' and 'am'
crosstab(mtcars[, c("vs", "am")])

# Example 1b: Alternative specification using the 'data' argument
crosstab(vs, am, data = mtcars)

# Example 2: Cross Tabulation, print all percentages
crosstab(mtcars[, c("vs", "am")], print = "all")

# Example 3: Cross Tabulation, print row-wise percentages
crosstab(mtcars[, c("vs", "am")], print = "row")

# Example 4: Cross Tabulation, print col-wise percentages
crosstab(mtcars[, c("vs", "am")], print = "col")

# Example 5: Cross Tabulation, print total percentages
crosstab(mtcars[, c("vs", "am")], print = "total")

# Example 6: Cross Tabulation, print all percentages, split output table
crosstab(mtcars[, c("vs", "am")], print = "all", split = TRUE)

#----------------------------------------------------------------------------
# Three-Dimensional Table

# Example 7a: Cross Tabulation for 'vs', 'am', and 'gear'
crosstab(mtcars[, c("vs", "am", "gear")])

# Example 7b: Alternative specification using the 'data' argument
crosstab(vs:gear, data = mtcars)

# Example 8: Cross Tabulation, print all percentages
crosstab(mtcars[, c("vs", "am", "gear")], print = "all")

# Example 9: Cross Tabulation, print all percentages, split output table
crosstab(mtcars[, c("vs", "am", "gear")], print = "all", split = TRUE)

## Not run:
# Example 10a: Write results into a text file
crosstab(mtcars[, c("vs", "am")], print = "all", write = "Crosstab.txt")

# Example 10b: Write results into an Excel file
crosstab(mtcars[, c("vs", "am")], print = "all", write = "Crosstab.xlsx")

result <- crosstab(mtcars[, c("vs", "am")], print = "all", output = FALSE)
write.result(result, "Crosstab.xlsx")
## End(Not run)
This function computes summary statistics for one or more variables, optionally by a grouping and/or split variable.
descript(..., data = NULL, print = c("all", "default", "n", "nNA", "pNA", "m", "se.m", "var", "sd", "min", "p25", "med", "p75", "max", "range", "iqr", "skew", "kurt"), group = NULL, split = NULL, sort.var = FALSE, na.omit = FALSE, digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a numeric vector, matrix or data frame with numeric variables,
i.e., factors and character variables are excluded from |
data |
a data frame when specifying one or more variables in the
argument |
print |
a character vector indicating which statistical measures to be
printed on the console, i.e. |
group |
a numeric vector, character vector or factor as grouping variable.
Alternatively, a character string indicating the variable name
of the grouping variable in |
split |
a numeric vector, character vector or factor as split variable.
Alternatively, a character string indicating the variable name
of the split variable in |
sort.var |
logical: if |
na.omit |
logical: if |
digits |
an integer value indicating the number of decimal places to be used. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in |
args |
specification of function arguments |
result |
result table(s) |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
ci.mean, ci.mean.diff, ci.median, ci.prop, ci.prop.diff, ci.var, ci.sd, freq, crosstab, multilevel.descript, na.descript.
# Example 1a: Descriptive statistics for 'mpg'
descript(mtcars$mpg)

# Example 1b: Alternative specification using the 'data' argument
descript(mpg, data = mtcars)

# Example 2: Descriptive statistics, print results with 3 digits
descript(mtcars$mpg, digits = 3)

# Example 3a: Descriptive statistics for 'mpg', print all available statistical measures
descript(mtcars$mpg, print = "all")

# Example 3b: Descriptive statistics for 'mpg', print default plus median
descript(mtcars$mpg, print = c("default", "med"))

# Example 4a: Descriptive statistics for 'mpg', 'cyl', and 'disp'
descript(mtcars[, c("mpg", "cyl", "disp")])

# Example 4b: Alternative specification using the 'data' argument
descript(mpg:disp, data = mtcars)

# Example 5a: Descriptive statistics, analysis by 'vs' separately
descript(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs)

# Example 5b: Alternative specification using the 'data' argument
descript(mpg:disp, data = mtcars, group = "vs")

# Example 6: Descriptive statistics, analysis by 'vs' separately, sort by variables
descript(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs, sort.var = TRUE)

# Example 7: Descriptive statistics, split analysis by 'am'
descript(mtcars[, c("mpg", "cyl", "disp")], split = mtcars$am)

# Example 8a: Descriptive statistics, analysis by 'vs' separately, split analysis by 'am'
descript(mtcars[, c("mpg", "cyl", "disp")], group = mtcars$vs, split = mtcars$am)

# Example 8b: Alternative specification using the 'data' argument
descript(mpg:disp, data = mtcars, group = "vs", split = "am")

## Not run:
# Example 9a: Write results into a text file
descript(mtcars[, c("mpg", "cyl", "disp")], write = "Descript.txt")

# Example 9b: Write results into an Excel file
descript(mtcars[, c("mpg", "cyl", "disp")], write = "Descript.xlsx")

result <- descript(mtcars[, c("mpg", "cyl", "disp")], output = FALSE)
write.result(result, "Descript.xlsx")
## End(Not run)
The function df.duplicated extracts duplicated rows and the function df.unique extracts unique rows from a matrix or data frame.
df.duplicated(..., data, first = TRUE, keep.all = TRUE, from.last = FALSE, keep.row.names = TRUE, check = TRUE) df.unique(..., data, keep.all = TRUE, from.last = FALSE, keep.row.names = TRUE, check = TRUE)
... |
an expression indicating the variable names in |
data |
a data frame. |
first |
logical: if |
keep.all |
logical: if |
from.last |
logical: if |
keep.row.names |
logical: if |
check |
logical: if |
Note that df.unique(x) is equivalent to unique(x). That is, the main difference between the df.unique() and the unique() function is that df.unique() provides the ... argument to specify a variable or multiple variables which are used to determine unique rows.

Returns duplicated or unique rows of the data frame in ... or data.
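The base R counterparts illustrate the behavior; a minimal sketch with a hypothetical three-row data frame:

dat <- data.frame(x1 = c(1, 1, 2), x2 = c(1, 1, 3))

# Duplicated rows including the first of identical rows
# (cf. df.duplicated(., data = dat))
dat[duplicated(dat) | duplicated(dat, fromLast = TRUE), ]

# Unique rows based on all variables (cf. df.unique(., data = dat))
unique(dat)

# Unique rows based on x1 only, keeping all variables
# (cf. df.unique(x1, data = dat))
dat[!duplicated(dat["x1"]), ]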
Takuya Yanagida [email protected]
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
df.merge, df.move, df.rbind, df.rename, df.sort, df.subset
dat <- data.frame(x1 = c(1, 1, 2, 1, 4),
                  x2 = c(1, 1, 2, 1, 6),
                  x3 = c(2, 2, 3, 2, 6),
                  x4 = c(1, 1, 2, 2, 4),
                  x5 = c(1, 1, 4, 4, 3))

#-------------------------------------------------------------------------------
# df.duplicated() function

# Example 1: Extract duplicated rows based on all variables
df.duplicated(., data = dat)

# Example 2: Extract duplicated rows based on x4
df.duplicated(x4, data = dat)

# Example 3: Extract duplicated rows based on x2 and x3
df.duplicated(x2, x3, data = dat)

# Example 4: Extract duplicated rows based on all variables
# exclude first of identical rows
df.duplicated(., data = dat, first = FALSE)

# Example 5: Extract duplicated rows based on x2 and x3
# do not return all variables
df.duplicated(x2, x3, data = dat, keep.all = FALSE)

# Example 6: Extract duplicated rows based on x4
# consider duplication from the reversed side
df.duplicated(x4, data = dat, first = FALSE, from.last = TRUE)

# Example 7: Extract duplicated rows based on x2 and x3
# set row names to NULL
df.duplicated(x2, x3, data = dat, keep.row.names = FALSE)

#-------------------------------------------------------------------------------
# df.unique() function

# Example 8: Extract unique rows based on all variables
df.unique(., data = dat)

# Example 9: Extract unique rows based on x4
df.unique(x4, data = dat)

# Example 10: Extract unique rows based on x1, x2, and x3
df.unique(x1, x2, x3, data = dat)

# Example 11: Extract unique rows based on x2 and x3
# do not return all variables
df.unique(x2, x3, data = dat, keep.all = FALSE)

# Example 12: Extract unique rows based on x4
# consider duplication from the reversed side
df.unique(x4, data = dat, from.last = TRUE)

# Example 13: Extract unique rows based on x2 and x3
# set row names to NULL
df.unique(x2, x3, data = dat, keep.row.names = FALSE)
This function merges data frames by a common column (i.e., matching variable).
df.merge(..., by, all = TRUE, check = TRUE, output = TRUE)
df.merge(..., by, all = TRUE, check = TRUE, output = TRUE)
... |
a sequence of data frames and/or matrices to be merged into one. |
by |
a character string indicating the column used for merging (i.e., matching variable), see 'Details'. |
all |
logical: if |
check |
logical: if |
output |
logical: if |
There are the following requirements for merging multiple data frames: First, each data frame must contain the matching variable specified in the by argument. Second, the matching variable must have the same class in all data frames. Third, there must be no duplicated values of the matching variable within each data frame. Fourth, there must be no missing values in the matching variable. Last, there must be no duplicated variable names across the data frames except for the matching variable.
Note that data frames and/or matrices can be specified in the argument ...; however, the function always returns a data frame.
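Conceptually, merging under these requirements corresponds to repeatedly applying a full outer join in base R; a minimal sketch (for illustration only, not the function's actual implementation):

adat <- data.frame(id = c(1, 2, 3), x1 = c(7, 3, 8))
bdat <- data.frame(id = c(1, 2), x2 = c(5, 1))
# Fold a full outer join by 'id' over the data frames (all = TRUE keeps
# rows without a match and fills them with NA)
Reduce(function(x, y) merge(x, y, by = "id", all = TRUE), list(adat, bdat))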
Returns a merged data frame.
Takuya Yanagida [email protected]
df.duplicated, df.move, df.rbind, df.rename, df.sort, df.subset
adat <- data.frame(id = c(1, 2, 3), x1 = c(7, 3, 8))
bdat <- data.frame(id = c(1, 2), x2 = c(5, 1))
cdat <- data.frame(id = c(2, 3), y3 = c(7, 9))
ddat <- data.frame(id = 4, y4 = 6)

# Example 1: Merge adat, bdat, cdat, and ddat by the variable id
df.merge(adat, bdat, cdat, ddat, by = "id")

# Example 2: Do not show output on the console
df.merge(adat, bdat, cdat, ddat, by = "id", output = FALSE)

## Not run:
#-------------------------------------------------------------------------------
# Error messages

adat <- data.frame(id = c(1, 2, 3), x1 = c(7, 3, 8))
bdat <- data.frame(code = c(1, 2, 3), x2 = c(5, 1, 3))
cdat <- data.frame(id = factor(c(1, 2, 3)), x3 = c(5, 1, 3))
ddat <- data.frame(id = c(1, 2, 2), x2 = c(5, 1, 3))
edat <- data.frame(id = c(1, NA, 3), x2 = c(5, 1, 3))
fdat <- data.frame(id = c(1, 2, 3), x1 = c(5, 1, 3))

# Error 1: Data frames do not have the same matching variable specified in 'by'.
df.merge(adat, bdat, by = "id")

# Error 2: Matching variable in the data frames do not all have the same class.
df.merge(adat, cdat, by = "id")

# Error 3: There are duplicated values in the matching variable specified in 'by'.
df.merge(adat, ddat, by = "id")

# Error 4: There are missing values in the matching variable specified in 'by'.
df.merge(adat, edat, by = "id")

# Error 5: There are duplicated variable names across data frames.
df.merge(adat, fdat, by = "id")
## End(Not run)
This function moves variables to a different position in the data frame, i.e., changes the column positions in the data frame. By default, variables specified in the first argument ... are moved to the first position in the data frame specified in the argument data.
df.move(..., data = NULL, before = NULL, after = NULL, first = TRUE, check = FALSE)
... |
an expression indicating the variable names in |
data |
a data frame. |
before |
a character string indicating a variable in |
after |
a character string indicating a variable in |
first |
logical: if |
check |
logical: if |
Returns the data frame in data with columns moved to a different position.
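For comparison, a base R sketch of the default behavior (moving columns to the first position); this mirrors Example 1 below but is not the function's actual implementation:

# Move 'hp' and 'am' to the first position by reordering the column names
mtcars[, c("hp", "am", setdiff(names(mtcars), c("hp", "am")))]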
Takuya Yanagida [email protected]
Becker, R. A., Chambers, J. M., & Wilks, A. R. (1988). The New S Language. Wadsworth & Brooks/Cole.
df.duplicated, df.merge, df.rbind, df.rename, df.sort, df.subset
# Example 1: Move variables 'hp' and 'am' to the first position
df.move(hp, am, data = mtcars)

# Example 2: Move variables 'hp' and 'am' to the last position
df.move(hp, am, data = mtcars, first = FALSE)

# Example 3: Move variables 'hp' and 'am' to the left-hand side of 'disp'
df.move(hp, am, data = mtcars, before = "disp")

# Example 4: Move variables 'hp' and 'am' to the right-hand side of 'disp'
df.move(hp, am, data = mtcars, after = "disp")
This function takes a sequence of data frames and combines them by rows, while filling in missing columns with NAs.
df.rbind(...)
... |
a sequence of data frames to be row-bound together. This argument can be a list of data frames, in which case all other arguments are ignored. Any NULL inputs are silently dropped. |
This is an enhancement to rbind that adds in columns that are not present in all inputs, accepts a sequence of data frames, and operates substantially faster. Column names and types in the output will appear in the order in which they were encountered.
Unordered factor columns will have their levels unified and character data bound with factors will be converted to character. POSIXct data will be converted to be in the same time zone. Array and matrix columns must have identical dimensions after the row count. Aside from these there are no general checks that each column is of consistent data type.
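A minimal sketch of the NA-filling idea (for illustration only; the actual implementation is the copy of plyr::rbind.fill() noted below):

fill_rbind <- function(...) {
  dfs <- list(...)
  # Union of all column names across the input data frames
  cols <- unique(unlist(lapply(dfs, names)))
  # Add missing columns as NA, align the column order, then row bind
  do.call(rbind, lapply(dfs, function(d) {
    miss <- setdiff(cols, names(d))
    if (length(miss) > 0) d[miss] <- NA
    d[cols]
  }))
}
fill_rbind(data.frame(id = 1, a = 2), data.frame(id = 2, b = 3))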
Returns a single data frame.
This function is a copy of the rbind.fill() function in the plyr package by Hadley Wickham.
Hadley Wickham
Wickham, H. (2011). The split-apply-combine strategy for data analysis. Journal of Statistical Software, 40, 1-29. https://doi.org/10.18637/jss.v040.i01
Wickham, H. (2019). plyr: Tools for Splitting, Applying and Combining Data. R package version 1.8.5.
df.duplicated, df.merge, df.move, df.rename, df.sort, df.subset
adat <- data.frame(id = c(1, 2, 3), a = c(7, 3, 8), b = c(4, 2, 7))
bdat <- data.frame(id = c(4, 5, 6), a = c(2, 4, 6), c = c(4, 2, 7))
cdat <- data.frame(id = c(7, 8, 9), a = c(1, 4, 6), d = c(9, 5, 4))

# Example 1
df.rbind(adat, bdat, cdat)
This function renames columns in a matrix or variables in a data frame by specifying a character string or character vector indicating the columns or variables to be renamed and a character string or character vector indicating the corresponding replacement values.
df.rename(x, from, to, check = TRUE)
x |
a matrix or data frame. |
from |
a character string or character vector indicating the column(s) or variable(s) to be renamed. |
to |
a character string or character vector indicating the corresponding replacement values for
the column(s) or variable(s) specified in the argument |
check |
logical: if |
Returns a matrix or data frame with renamed columns or variables.
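For comparison, a base R sketch of the renaming in Example 1 below (not the function's actual implementation):

dat <- data.frame(a = 1, b = 2, c = 3)
names(dat)[names(dat) == "b"] <- "y"   # rename variable 'b' to 'y'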
Takuya Yanagida [email protected]
df.duplicated, df.merge, df.move, df.rbind, df.sort, df.subset
dat <- data.frame(a = c(3, 1, 6), b = c(4, 2, 5), c = c(7, 3, 1))

# Example 1: Rename variable b in the data frame 'dat' to y
df.rename(dat, from = "b", to = "y")

# Example 2: Rename variables a, b, and c in the data frame 'dat' to x, y, and z
df.rename(dat, from = c("a", "b", "c"), to = c("x", "y", "z"))
This function arranges a data frame in increasing or decreasing order according to one or more variables.
df.sort(x, ..., decreasing = FALSE, check = TRUE)
x |
a data frame. |
... |
a sorting variable or a sequence of sorting variables which are specified without quotes. |
decreasing |
logical: if |
check |
logical: if |
Returns data frame x sorted according to the variables specified in ...; a matrix will be coerced to a data frame.
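For comparison, a base R sketch of sorting by one or two variables (see the Examples below); df.sort() additionally performs the coercion and checks described above:

dat <- data.frame(x = c(5, 2, 5), y = c(1, 6, 2))
dat[order(dat$x, dat$y), ]               # increasing order by 'x' and 'y'
dat[order(dat$x, decreasing = TRUE), ]   # decreasing order by 'x'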
Takuya Yanagida [email protected]
Becker, R. A., Chambers, J. M., & Wilks, A. R. (1988). The New S Language. Wadsworth & Brooks/Cole.
Knuth, D. E. (1998). The Art of Computer Programming, Volume 3: Sorting and Searching (2nd ed.). Addison-Wesley.
df.duplicated, df.merge, df.move, df.rbind, df.rename, df.subset
dat <- data.frame(x = c(5, 2, 5, 5, 7, 2),
                  y = c(1, 6, 2, 3, 2, 3),
                  z = c(2, 1, 6, 3, 7, 4))

# Example 1: Sort data frame 'dat' by "x" in increasing order
df.sort(dat, x)

# Example 2: Sort data frame 'dat' by "x" in decreasing order
df.sort(dat, x, decreasing = TRUE)

# Example 3: Sort data frame 'dat' by "x" and "y" in increasing order
df.sort(dat, x, y)

# Example 4: Sort data frame 'dat' by "x" and "y" in decreasing order
df.sort(dat, x, y, decreasing = TRUE)
This function returns subsets of data frames that meet the specified conditions.
df.subset(..., data, subset = NULL, drop = TRUE, check = TRUE)
... |
an expression indicating variables to select from the data frame
specified in |
data |
a data frame that contains the variables specified in the
argument |
subset |
character string with a logical expression indicating rows to
keep, e.g., |
drop |
logical: if |
check |
logical: if |
The argument ... is used to specify an expression indicating the variables to select from the data frame specified in data, e.g., df.subset(x1, x2, x3, data = dat). There are seven operators which can be used in the expression ...:

Dot (.) Operator: The dot operator is used to select all variables from the data frame specified in data. For example, df.subset(., data = dat) selects all variables in dat. Note that this operator is similar to the function everything() from the tidyselect package.

Plus (+) Operator: The plus operator is used to select variables matching a prefix from the data frame specified in data. For example, df.subset(+x, data = dat) selects all variables with the prefix x. Note that this operator is equivalent to the function starts_with() from the tidyselect package.

Minus (-) Operator: The minus operator is used to select variables matching a suffix from the data frame specified in data. For example, df.subset(-y, data = dat) selects all variables with the suffix y. Note that this operator is equivalent to the function ends_with() from the tidyselect package.

Tilde (~) Operator: The tilde operator is used to select variables containing a word from the data frame specified in data. For example, df.subset(~al, data = dat) selects all variables containing the word al. Note that this operator is equivalent to the function contains() from the tidyselect package.

Colon (:) Operator: The colon operator is used to select a range of consecutive variables from the data frame specified in data. For example, df.subset(x:z, data = dat) selects all variables from x to z. Note that this operator is equivalent to the : operator from the select function in the dplyr package.

Double Colon (::) Operator: The double colon operator is used to select numbered variables from the data frame specified in data. For example, df.subset(x1::x3, data = dat) selects the variables x1, x2, and x3. Note that this operator is similar to the function num_range() from the tidyselect package.

Exclamation Point (!) Operator: The exclamation point operator is used to drop variables from the data frame specified in data or for taking the complement of a set of variables. For example, df.subset(., !x, data = dat) selects all variables but x in dat, df.subset(., !~x, data = dat) selects all variables but variables containing x, and df.subset(x:z, !x1:x3, data = dat) selects all variables from x to z but excludes all variables from x1 to x3. Note that this operator is equivalent to the ! operator from the select function in the dplyr package.

Note that operators can be combined within the same function call. For example, df.subset(+x, -y, !x2:x4, z, data = dat) selects all variables with the prefix x and with the suffix y, excludes variables from x2 to x4, and selects variable z.
Returns a data frame containing the variables selected in the argument ... and the rows selected in the argument subset.
Takuya Yanagida [email protected]
Becker, R. A., Chambers, J. M., & Wilks, A. R. (1988). The New S Language. Wadsworth & Brooks/Cole.
df.duplicated, df.merge, df.move, df.rbind, df.rename, df.sort
## Not run:
#-------------------------------------------------------------------------------
# Select single variables

# Example 1: Select 'Sepal.Length' and 'Petal.Width'
df.subset(Sepal.Length, Petal.Width, data = iris)

#-------------------------------------------------------------------------------
# Select all variables using the . operator

# Example 2a: Select all variables, select rows with 'Species' equal 'setosa'
# Note that single quotation marks ('') are needed to specify 'setosa'
df.subset(., data = iris, subset = "Species == 'setosa'")

# Example 2b: Select all variables, select rows with 'Petal.Length' smaller than 1.2
df.subset(., data = iris, subset = "Petal.Length < 1.2")

#-------------------------------------------------------------------------------
# Select variables matching a prefix using the + operator

# Example 3: Select variables with prefix 'Petal'
df.subset(+Petal, data = iris)

#-------------------------------------------------------------------------------
# Select variables matching a suffix using the - operator

# Example 4: Select variables with suffix 'Width'
df.subset(-Width, data = iris)

#-------------------------------------------------------------------------------
# Select variables containing a word using the ~ operator

# Example 5: Select variables containing 'al'
df.subset(~al, data = iris)

#-------------------------------------------------------------------------------
# Select consecutive variables using the : operator

# Example 6: Select all variables from 'Sepal.Width' to 'Petal.Width'
df.subset(Sepal.Width:Petal.Width, data = iris)

#-------------------------------------------------------------------------------
# Select numbered variables using the :: operator

# Example 7: Select all variables from 'x1' to 'x3' and 'y1' to 'y3'
df.subset(x1::x3, y1::y3, data = anscombe)

#-------------------------------------------------------------------------------
# Drop variables using the ! operator

# Example 8a: Select all variables but 'Sepal.Width'
df.subset(., !Sepal.Width, data = iris)

# Example 8b: Select all variables but 'Sepal.Width' to 'Petal.Width'
df.subset(., !Sepal.Width:Petal.Width, data = iris)

#-------------------------------------------------------------------------------
# Combine +, -, !, and : operators

# Example 9: Select variables with prefix 'x' and suffix '3', but exclude
# variables from 'x2' to 'x3'
df.subset(+x, -3, !x2:x3, data = anscombe)
## End(Not run)
This function conducts dominance analysis (Budescu, 1993; Azen & Budescu, 2003) for linear models estimated by using the lm() function to determine the relative importance of predictor variables. By default, the function reports general dominance, but conditional and complete dominance can be requested by specifying the argument print.
dominance(model, print = c("all", "gen", "cond", "comp"), digits = 3, write = NULL, append = TRUE, check = TRUE, output = TRUE)
model |
a fitted model of class |
print |
a character string or character vector indicating which results
to show on the console, i.e. |
digits |
an integer value indicating the number of decimal places to be
used for displaying results. Note that the percentage relative
importance of predictors are printed with |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Dominance analysis (Budescu, 1993; Azen & Budescu, 2003) is used to determine the relative importance of predictor variables in a statistical model by examining the additional contribution of predictors in R-squared relative to each other in all of the 2^(p - 1) possible subset models, with p being the number of predictors. Three levels of dominance can be established through pairwise comparison of all predictors in a regression model:

Complete Dominance: A predictor completely dominates another predictor if its additional contribution in R-squared is higher than that of the other predictor across all possible subset models that do not include both predictors. For example, in a regression model with four predictors, X1 completely dominates X2 if the additional contribution in R-squared for X1 is higher compared to X2 in (1) the null model without any predictors, (2) the model including X3, (3) the model including X4, and (4) the model including both X3 and X4. Note that complete dominance cannot be established if one predictor's additional contribution is greater than the other's for some, but not all of the subset models. In this case, dominance is undetermined and the result will be NA.

Conditional Dominance: A predictor conditionally dominates another predictor if its average additional contribution in R-squared within each model size is higher than that of the other predictor. For example, in a regression model with four predictors, X1 conditionally dominates X2 if the average additional contribution in R-squared is higher compared to X2 in (1) the null model without any predictors, (2) the four models including one predictor, (3) the six models including two predictors, and (4) the four models including three predictors.

General Dominance: A predictor generally dominates another predictor if its overall averaged additional contribution in R-squared is higher than that of the other predictor. For example, in a regression model with four predictors, X1 generally dominates X2 if the average across the four conditional values (i.e., null model, model with one predictor, model with two predictors, and model with three predictors) is higher than that of X2. Note that the general dominance measures represent the proportional contribution that each predictor makes to the R-squared, since their sum across all predictors equals the R-squared of the full model.
The three levels of dominance are related to each other in a hierarchical fashion: Complete dominance implies conditional dominance, which in turn implies general dominance. However, the converse may not hold for more than three predictors. That is, general dominance does not imply conditional dominance, and conditional dominance does not necessarily imply complete dominance.
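The following sketch illustrates the notion of an additional contribution in R-squared and the property that the general dominance values sum to the R-squared of the full model (manual computation for illustration, not the function's implementation):

r2 <- function(f) summary(lm(f, data = mtcars))$r.squared
r2(mpg ~ cyl)                           # contribution of 'cyl' in the null model
r2(mpg ~ disp + cyl) - r2(mpg ~ disp)   # additional contribution of 'cyl' given 'disp'
r2(mpg ~ cyl + disp + hp)               # equals the sum of the general dominance values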
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
model |
model specified in |
args |
specification of function arguments |
result |
list with results, i.e., |
This function is based on the domir function from the domir package (Luchman, 2023).
Takuya Yanagida [email protected]
Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods, 8(2), 129–148. https://doi.org/10.1037/1082-989X.8.2.129
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114(3), 542–551. https://doi.org/10.1037/0033-2909.114.3.542
Luchman J (2023). domir: Tools to support relative importance analysis. R package version 1.0.1, https://CRAN.R-project.org/package=domir.
dominance.manual, std.coef, write.result
#----------------------------------------------------------------------------
# Example 1: Dominance analysis for a linear model

mod <- lm(mpg ~ cyl + disp + hp, data = mtcars)
dominance(mod)

# Print all results
dominance(mod, print = "all")

## Not run:
#----------------------------------------------------------------------------
# Example 2: Write results into a text file
dominance(mod, write = "Dominance.txt", output = FALSE)

#----------------------------------------------------------------------------
# Example 3: Write results into an Excel file
dominance(mod, write = "Dominance.xlsx", output = FALSE)

result <- dominance(mod, print = "all", output = FALSE)
write.result(result, "Dominance.xlsx")
## End(Not run)
This function conducts dominance analysis (Budescu, 1993; Azen & Budescu, 2003) based on a (model-implied) correlation matrix of the manifest or latent variables. Note that the function only provides general dominance.
dominance.manual(x, out = NULL, digits = 3, write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
a matrix or data frame with the (model-implied) correlation matrix
of the manifest or latent variables. Note that column names need
to represent the variables names in |
out |
a character string representing the outcome variable. By default, the first row and column represents the outcome variable. |
digits |
an integer value indicating the number of decimal places to be
used for displaying results. Note that the percentage relative
importance of predictors are printed with |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
correlation matrix specified in |
args |
specification of function arguments |
result |
results table for the general dominance |
This function implements the function provided in Appendix 1 of Gu (2022) and includes a copy of the function combinations() from the gtools package (Bolker, Warnes, & Lumley, 2022).
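The building block of the correlation-based approach is the R-squared computed from a correlation matrix; a minimal sketch (for illustration only, not the function's actual implementation):

R <- cor(mtcars[, c("mpg", "cyl", "disp", "hp")])
ryx <- R[-1, 1]                       # correlations of the outcome with the predictors
Rxx <- R[-1, -1]                      # intercorrelations among the predictors
drop(t(ryx) %*% solve(Rxx) %*% ryx)   # R-squared of 'mpg' on 'cyl', 'disp', and 'hp'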
Takuya Yanagida [email protected]
Azen, R., & Budescu, D. V. (2003). The dominance analysis approach for comparing predictors in multiple regression. Psychological Methods, 8(2), 129–148. https://doi.org/10.1037/1082-989X.8.2.129
Bolker, B., Warnes, G., & Lumley, T. (2022). gtools: Various R Programming Tools. R package version 3.9.4, https://CRAN.R-project.org/package=gtools
Budescu, D. V. (1993). Dominance analysis: A new approach to the problem of relative importance of predictors in multiple regression. Psychological Bulletin, 114(3), 542–551. https://doi.org/10.1037/0033-2909.114.3.542
Gu, X. (2022). Assessing the relative importance of predictors in latent regression models. Structural Equation Modeling: A Multidisciplinary Journal, 4, 569-583. https://doi.org/10.1080/10705511.2021.2025377
dominance, std.coef, write.result
## Not run:
#----------------------------------------------------------------------------
# Linear model

# Example 1a: Dominance analysis, 'mpg' predicted by 'cyl', 'disp', and 'hp'
dominance.manual(cor(mtcars[, c("mpg", "cyl", "disp", "hp")]))

# Example 1b: Equivalent results using the dominance() function
mod <- lm(mpg ~ cyl + disp + hp, data = mtcars)
dominance(mod)

# Example 1c: Dominance analysis, 'hp' predicted by 'mpg', 'cyl', and 'disp'
dominance.manual(cor(mtcars[, c("mpg", "cyl", "disp", "hp")]), out = "hp")

# Example 1d: Write results into a text file
dominance.manual(cor(mtcars[, c("mpg", "cyl", "disp", "hp")]),
                 write = "Dominance_Manual.txt")

#----------------------------------------------------------------------------
# Example 2: Structural equation modeling

library(lavaan)

#.............
# Latent variables

# Model specification
model <- '# Measurement model
          ind60 =~ x1 + x2 + x3
          dem60 =~ y1 + y2 + y3 + y4
          dem65 =~ y5 + y6 + y7 + y8
          # Regressions
          ind60 ~ dem60 + dem65'

# Model estimation
fit <- sem(model, data = PoliticalDemocracy)

# Model-implied correlation matrix of the latent variables
fit.cor <- lavInspect(fit, what = "cor.lv")

# Dominance analysis
dominance.manual(fit.cor)

#.............
# Example 3: Latent and manifest variables

# Model specification, convert manifest to latent variable
model <- '# Measurement model
          ind60 =~ x1 + x2 + x3
          dem60 =~ y1 + y2 + y3 + y4
          # Manifest as latent variable
          ly5 =~ 1*y5
          y5 ~~ 0*y5
          # Regressions
          ind60 ~ dem60 + ly5'

# Model estimation
fit <- sem(model, data = PoliticalDemocracy)

# Model-implied correlation matrix of the latent variables
fit.cor <- lavInspect(fit, what = "cor.lv")

# Dominance analysis
dominance.manual(fit.cor)

#----------------------------------------------------------------------------
# Example 4: Multilevel modeling

# Model specification
model <- 'level: 1
            fw =~ y1 + y2 + y3
            # Manifest as latent variables
            lx1 =~ 1*x1
            lx2 =~ 1*x2
            lx3 =~ 1*x3
            x1 ~~ 0*x1
            x2 ~~ 0*x2
            x3 ~~ 0*x3
            # Regression
            fw ~ lx1 + lx2 + lx3
          level: 2
            fb =~ y1 + y2 + y3
            # Manifest as latent variables
            lw1 =~ 1*w1
            lw2 =~ 1*w2
            # Regression
            fb ~ lw1 + lw2'

# Model estimation
fit <- sem(model, data = Demo.twolevel, cluster = "cluster")

# Model-implied correlation matrix of the latent variables
fit.cor <- lavInspect(fit, what = "cor.lv")

# Dominance analysis Within
dominance.manual(fit.cor$within)

# Dominance analysis Between
dominance.manual(fit.cor$cluster)

#----------------------------------------------------------------------------
# Example 5: Mplus
#
# In Mplus, the model-implied correlation matrix of the latent variables
# can be requested by OUTPUT: TECH4 and imported into R by using the
# MplusAutomation package, for example:

library(MplusAutomation)

# Read Mplus output
output <- readModels()

# Extract model-implied correlation matrix of the latent variables
fit.cor <- output$tech4$latCorEst
## End(Not run)
This function computes effect sizes for one or more than one categorical variable, i.e., (adjusted) phi coefficient, (bias-corrected) Cramer's V, (bias-corrected) Tschuprow's T, (adjusted) Pearson's contingency coefficient, Cohen's w, and Fei. By default, the function computes Fei based on a chi-square goodness-of-fit test for one categorical variable, the phi coefficient based on a chi-square test of independence for two dichotomous variables, and Cramer's V based on a chi-square test of independence for two variables with at least one polytomous variable.
effsize(..., data = NULL, type = c("phi", "cramer", "tschuprow", "cont", "w", "fei"), alternative = c("two.sided", "less", "greater"), conf.level = 0.95, adjust = TRUE, indep = TRUE, p = NULL, digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a vector, factor, matrix or data frame. Alternatively, an
expression indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
type |
a character string indicating the type of effect size, i.e.,
|
alternative |
a character string specifying the alternative hypothesis,
must be one of |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
adjust |
logical: if |
indep |
logical: if |
p |
a numeric vector specifying the expected proportions in each category of the categorical variable when conducting a chi-square goodness-of-fit test. By default, the expected proportions in each category are assumed to be equal. |
digits |
an integer value indicating the number of decimal places digits to be used for displaying the results. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output
into either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
This function is based on modified copies of the functions chisq_to_phi, chisq_to_cramers_v, chisq_to_tschuprows_t, chisq_to_pearsons_c, chisq_to_cohens_w, and chisq_to_fei from the effectsize package (Ben-Shachar, Lüdecke & Makowski, 2020).
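As a point of reference, the unadjusted phi coefficient can be computed from the chi-square statistic as sqrt(chi-square / n); a minimal sketch for the 2 x 2 table of 'vs' and 'am' (manual computation for illustration only):

tab <- table(mtcars$vs, mtcars$am)
sqrt(chisq.test(tab, correct = FALSE)$statistic / sum(tab))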
Takuya Yanagida [email protected]
Bergsma, W. (2013). A bias correction for Cramer's V and Tschuprow's T. Journal of the Korean Statistical Society, 42, 323-328. https://doi.org/10.1016/j.jkss.2012.10.002
Ben-Shachar M. S., Lüdecke D., Makowski D. (2020). effectsize: Estimation of Effect Size Indices and Standardized Parameters. Journal of Open Source Software, 5 (56), 2815. https://doi.org/10.21105/joss.02815
Ben-Shachar, M. S., Patil, I., Theriault, R., Wiernik, B. M., Lüdecke, D. (2023). Phi, Fei, Fo, Fum: Effect sizes for categorical data that use the chi-squared statistic. Mathematics, 11, 1982. https://doi.org/10.3390/math11091982
Cureton, E. E. (1959). Note on Phi/Phi max. Psychometrika, 24, 89-91.
Davenport, E. C., & El-Sanhurry, N. A. (1991). Phi/Phimax: Review and synthesis. Educational and Psychological Measurement, 51, 821-828. https://doi.org/10.1177/001316449105100403
Sakoda, J.M. (1977). Measures of association for multivariate contingency tables. Proceedings of the Social Statistics Section of the American Statistical Association (Part III), 777-780.
# Example 1a: Phi coefficient for 'vs' and 'am'
effsize(mtcars[, c("vs", "am")])

# Example 1b: Alternative specification using the 'data' argument
effsize(vs, am, data = mtcars)

# Example 2: Bias-corrected Cramer's V for 'gear' and 'carb'
effsize(gear, carb, data = mtcars)

# Example 3: Cramer's V (without bias-correction) for 'gear' and 'carb'
effsize(gear, carb, data = mtcars, adjust = FALSE)

# Example 4: Adjusted Pearson's contingency coefficient for 'gear' and 'carb'
effsize(gear, carb, data = mtcars, type = "cont")

# Example 5: Fei for 'gear'
effsize(gear, data = mtcars)

# Example 6a: Bias-corrected Cramer's V for 'cyl' and 'vs', 'am', 'gear', and 'carb'
effsize(mtcars[, c("cyl", "vs", "am", "gear", "carb")])

# Example 6b: Alternative specification using the 'data' argument
effsize(cyl, vs:carb, data = mtcars)

## Not run:
# Example 7a: Write results into a text file
effsize(cyl, vs:carb, data = mtcars, write = "Cramer.txt")

# Example 7b: Write results into an Excel file
effsize(cyl, vs:carb, data = mtcars, write = "Cramer.xlsx")
## End(Not run)
This function computes a frequency table with absolute and percentage frequencies for one or more than one variable.
freq(..., data = NULL, print = c("no", "all", "perc", "v.perc"), freq = TRUE, split = FALSE, labels = TRUE, val.col = FALSE, round = 3, exclude = 15, digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a vector, factor, matrix or data frame. Alternatively, an
expression indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
print |
a character string indicating which percentage(s) to be
printed on the console, i.e., no percentages ( |
freq |
logical: if |
split |
logical: if |
labels |
logical: if |
val.col |
logical: if |
round |
an integer value indicating the number of decimal places to be used for rounding numeric variables. |
exclude |
an integer value indicating the maximum number of unique
values for variables to be included in the analysis when
specifying more than one variable in |
digits |
an integer value indicating the number of decimal places to be used for displaying percentages. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
By default, the function displays the absolute and percentage frequencies when specifying one variable in the argument ..., while the function displays only the absolute frequencies when more than one variable is specified. The function displays valid percentage frequencies only in the presence of missing values and excludes variables with all values missing from the analysis. Note that it is possible to mix numeric variables, factors, and character variables in the data frame specified in the argument .... By default, numeric variables are rounded to three digits before computing the frequency table.
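For orientation, a base R sketch of the absolute and percentage frequencies for a single variable (not the function's actual implementation):

x <- mtcars$cyl
cbind(Freq = table(x), Perc = 100 * prop.table(table(x)))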
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Becker, R. A., Chambers, J. M., & Wilks, A. R. (1988). The New S Language. Wadsworth & Brooks/Cole.
write.result, crosstab, descript, multilevel.descript, na.descript
# Example 1a: Frequency table for 'cyl'
freq(mtcars$cyl)

# Example 1b: Alternative specification using the 'data' argument
freq(cyl, data = mtcars)

# Example 2: Frequency table, values shown in columns
freq(mtcars$cyl, val.col = TRUE)

# Example 3: Frequency table, use 3 digits for displaying percentages
freq(mtcars$cyl, digits = 3)

# Example 4a: Frequency table for 'cyl', 'gear', and 'carb'
freq(mtcars[, c("cyl", "gear", "carb")])

# Example 4b: Alternative specification using the 'data' argument
freq(cyl, gear, carb, data = mtcars)

# Example 5: Frequency table, with percentage frequencies
freq(mtcars[, c("cyl", "gear", "carb")], print = "all")

# Example 6: Frequency table, split output table
freq(mtcars[, c("cyl", "gear", "carb")], split = TRUE)

# Example 7: Frequency table, exclude variables with more than 5 unique values
freq(mtcars, exclude = 5)

## Not run:
# Example 8a: Write results into a text file
freq(mtcars[, c("cyl", "gear", "carb")], split = TRUE, write = "Frequencies.txt")

# Example 8b: Write results into an Excel file
freq(mtcars[, c("cyl", "gear", "carb")], split = TRUE, write = "Frequencies.xlsx")

result <- freq(mtcars[, c("cyl", "gear", "carb")], split = TRUE, output = FALSE)
write.result(result, "Frequencies.xlsx")
## End(Not run)
This function computes confidence intervals for the indirect effect based on the asymptotic normal method, the distribution of the product method, and the Monte Carlo method. By default, the function uses the distribution of the product method for computing the two-sided 95% asymmetric confidence interval for the indirect effect based on the product of coefficient estimator ab.
indirect(a, b, se.a, se.b, print = c("all", "asymp", "dop", "mc"), se = c("sobel", "aroian", "goodman"), nrep = 100000, alternative = c("two.sided", "less", "greater"), seed = NULL, conf.level = 0.95, digits = 3, write = NULL, append = TRUE, check = TRUE, output = TRUE)
a |
a numeric value indicating the coefficient |
b |
a numeric value indicating the coefficient |
se.a |
a positive numeric value indicating the standard error of
|
se.b |
a positive numeric value indicating the standard error of
|
print |
a character string or character vector indicating which confidence
intervals (CI) to show on the console, i.e. |
se |
a character string indicating which standard error (SE) to compute
for the asymptotic normal method, i.e., |
nrep |
an integer value indicating the number of Monte Carlo repetitions. |
alternative |
a character string specifying the alternative hypothesis, must be
one of |
seed |
a numeric value specifying the seed of the random number generator when using the Monte Carlo method. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
digits |
an integer value indicating the number of decimal places to be used for displaying |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
In statistical mediation analysis (MacKinnon & Tofighi, 2013), the indirect effect refers to the effect of the independent variable X on the outcome variable Y transmitted by the mediator variable M. The magnitude of the indirect effect ab is quantified by the product of the coefficient a (i.e., effect of X on M) and the coefficient b (i.e., effect of M on Y adjusted for X). In practice, researchers are often interested in confidence limit estimation for the indirect effect. This function offers three different methods for computing the confidence interval for the product of coefficient estimator ab:
(1) Asymptotic normal method

In the asymptotic normal method, the standard error for the product of coefficient estimator ab is computed, which is used to create a symmetrical confidence interval based on the z-value of the standard normal distribution, assuming that the indirect effect is normally distributed. Note that the function provides three formulas for computing the standard error by specifying the argument se:

"sobel": Approximate standard error by Sobel (1982) using the multivariate delta method based on a first-order Taylor series approximation: sqrt(a^2 * se.b^2 + b^2 * se.a^2)

"aroian": Exact standard error by Aroian (1947) based on a first- and second-order Taylor series approximation: sqrt(a^2 * se.b^2 + b^2 * se.a^2 + se.a^2 * se.b^2)

"goodman": Unbiased standard error by Goodman (1960): sqrt(a^2 * se.b^2 + b^2 * se.a^2 - se.a^2 * se.b^2)

Note that the unbiased standard error is often negative and is hence undefined for zero or small effects or small sample sizes.
The asymptotic normal method is known to have low statistical power because the distribution of the product ab is not normally distributed (Kisbu-Sakarya, MacKinnon, & Miocevic, 2014). In the null case, where both random variables have mean equal to zero, the distribution is symmetric with kurtosis of six. When the product of the means of the two random variables is nonzero, the distribution is skewed (up to a maximum value of 1.5) and has an excess kurtosis (up to a maximum value of 6). However, the product approaches a normal distribution as one or both of the ratios of the means to standard errors of each random variable get large in absolute value (MacKinnon, Lockwood & Williams, 2004).
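A minimal sketch of the asymptotic normal method with the Sobel standard error, using the values from Example 1 below (manual computation for illustration only):

a <- 0.35; b <- 0.27; se.a <- 0.12; se.b <- 0.18
se.ab <- sqrt(a^2 * se.b^2 + b^2 * se.a^2)   # Sobel standard error
a * b + c(-1, 1) * qnorm(0.975) * se.ab      # symmetric two-sided 95% CI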
(2) Distribution of the product method

The distribution of the product method (MacKinnon et al., 2002) relies on an analytical approximation of the distribution of the product of two normally distributed variables. The method uses the standardized a and b coefficients to compute ab and then uses the critical values for the distribution of the product (Meeker, Cornwell, & Aroian, 1981) to create asymmetric confidence intervals. The distribution of the product approaches the gamma distribution (Aroian, 1947). The analytical solution for the distribution of the product is provided by the Bessel function used in the solution of differential equations and is approximately proportional to the Bessel function of the second kind with a purely imaginary argument (Craig, 1936).
(3) Monte Carlo method

The Monte Carlo (MC) method (MacKinnon et al., 2004) relies on the assumption that the parameters a and b have a joint normal sampling distribution. Based on this parametric assumption, a sampling distribution of the product ab is generated using random samples with population values equal to the sample estimates of a, b, se.a, and se.b. Percentiles of the sampling distribution are identified to serve as limits for a 100(1 - alpha)% asymmetric confidence interval about the sample ab (Preacher & Selig, 2012). Note that parametric assumptions are invoked for a and b, but no parametric assumptions are made about the distribution of ab.
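A minimal sketch of the Monte Carlo method, assuming zero covariance between a and b (for illustration only; in the function, the seed and nrep arguments control this simulation):

set.seed(123)
ab <- rnorm(100000, mean = 0.35, sd = 0.12) * rnorm(100000, mean = 0.27, sd = 0.18)
quantile(ab, probs = c(0.025, 0.975))   # limits of the asymmetric 95% CI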
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in |
args |
specification of function arguments |
result |
list with result tables, i.e., |
The function was adapted from the medci() function in the RMediation package by Davood Tofighi and David P. MacKinnon (2016).
Takuya Yanagida [email protected]
Aroian, L. A. (1947). The probability function of the product of two normally distributed variables. Annals of Mathematical Statistics, 18, 265-271. https://doi.org/10.1214/aoms/1177730442
Craig, C. C. (1936). On the frequency function of xy. Annals of Mathematical Statistics, 7, 1-15. https://doi.org/10.1214/aoms/1177732541
Goodman, L. A. (1960). On the exact variance of products. Journal of the American Statistical Association, 55, 708-713. https://doi.org/10.1080/01621459.1960.10483369
Kisbu-Sakarya, Y., MacKinnon, D. P., & Miocevic M. (2014). The distribution of the product explains normal theory mediation confidence interval estimation. Multivariate Behavioral Research, 49, 261–268. https://doi.org/10.1080/00273171.2014.903162
MacKinnon, D. P., Lockwood, C. M., Hoffman, J. M., West, S. G., & Sheets, V. (2002). Comparison of methods to test mediation and other intervening variable effects. Psychological Methods, 7, 83–104. https://doi.org/10.1037/1082-989x.7.1.83
MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128. https://doi.org/10.1207/s15327906mbr3901_4
MacKinnon, D. P., & Tofighi, D. (2013). Statistical mediation analysis. In J. A. Schinka, W. F. Velicer, & I. B. Weiner (Eds.), Handbook of psychology: Research methods in psychology (pp. 717-735). John Wiley & Sons, Inc.
Meeker, W. Q., Jr., Cornwell, L. W., & Aroian, L. A. (1981). The product of two normally distributed random variables. In W. J. Kennedy & R. E. Odeh (Eds.), Selected tables in mathematical statistics (Vol. 7, pp. 1–256). Providence, RI: American Mathematical Society.
Preacher, K. J., & Selig, J. P. (2012). Advantages of Monte Carlo confidence intervals for indirect effects. Communication Methods and Measures, 6, 77–98. http://dx.doi.org/10.1080/19312458.2012.679848
Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equation models. In S. Leinhardt (Ed.), Sociological methodology 1982 (pp. 290-312). Washington, DC: American Sociological Association.
Tofighi, D. & MacKinnon, D. P. (2011). RMediation: An R package for mediation analysis confidence intervals. Behavior Research Methods, 43, 692-700. https://doi.org/10.3758/s13428-011-0076-x
# Example 1: Distribution of the Product Method
indirect(a = 0.35, b = 0.27, se.a = 0.12, se.b = 0.18)

# Example 2: Monte Carlo Method
indirect(a = 0.35, b = 0.27, se.a = 0.12, se.b = 0.18, print = "mc")

# Example 3: Asymptotic Normal Method
indirect(a = 0.35, b = 0.27, se.a = 0.12, se.b = 0.18, print = "asymp")

## Not run:
# Example 4: Write results into a text file
indirect(a = 0.35, b = 0.27, se.a = 0.12, se.b = 0.18, write = "Indirect.txt")
## End(Not run)
This function computes the point estimate and confidence interval for the (ordinal) coefficient alpha (aka Cronbach's alpha) along with the corrected item-total correlation and coefficient alpha if item deleted.
item.alpha(..., data = NULL, exclude = NULL, std = FALSE, ordered = FALSE, na.omit = FALSE, print = c("all", "alpha", "item"), digits = 2, conf.level = 0.95, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix, data frame, variance-covariance or correlation
matrix. Note that raw data is needed to compute ordinal
coefficient alpha, i.e., |
data |
a data frame when specifying one or more variables in the
argument |
exclude |
a character vector indicating items to be excluded from the analysis. |
std |
logical: if |
ordered |
logical: if |
na.omit |
logical: if |
print |
a character vector indicating which results to show, i.e.
|
digits |
an integer value indicating the number of decimal places to be used for displaying coefficient alpha and item-total correlations. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Ordinal coefficient alpha was introduced by Zumbo, Gadermann and Zeisser (2007) and is obtained by applying the formula for computing coefficient alpha to the polychoric correlation matrix instead of the variance-covariance or product-moment correlation matrix. Note that Chalmers (2018) highlighted that the ordinal coefficient alpha should be interpreted only as a hypothetical estimate of an alternative reliability in which a test's ordinal categorical response options had been modified to include an infinite number of ordinal response options, and concluded that coefficient alpha should not be reported as a measure of a test's reliability. However, Zumbo and Kroc (2019) argued that Chalmers' critique of ordinal coefficient alpha is unfounded and that ordinal coefficient alpha may be the most appropriate quantifier of reliability when using Likert-type measurement to study a latent continuous random variable.
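A minimal sketch of the idea behind ordinal coefficient alpha, applying the standardized alpha formula to a polychoric correlation matrix (assuming the psych package is available for polychoric(); not the function's actual implementation):

dat <- data.frame(item1 = c(4, 2, 3, 4, 1, 2, 4, 2),
                  item2 = c(4, 3, 3, 3, 2, 2, 4, 1),
                  item3 = c(3, 2, 4, 2, 1, 3, 4, 1),
                  item4 = c(4, 1, 2, 3, 2, 3, 4, 2))
R <- psych::polychoric(dat)$rho    # polychoric correlation matrix
k <- ncol(R)
(k / (k - 1)) * (1 - k / sum(R))   # alpha from a correlation matrix (trace(R) = k)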
Confidence intervals are computed using the procedure by Feldt, Woodruff, and Salih (1987). When computing confidence intervals using pairwise deletion, the average sample size across all pairwise samples is used. Note that there are at least 10 other procedures for computing the confidence interval (see Kelley & Pornprasertmanit, 2016), which are implemented in the ci.reliability() function in the MBESS package by Ken Kelley (2019).
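For orientation, coefficient alpha can be computed directly from a covariance or correlation matrix; the following is a minimal sketch of the textbook formula (the helper alpha_from_cov() is hypothetical and not part of the package, nor the internal implementation of item.alpha()):

# alpha = k / (k - 1) * (1 - sum of item variances / total scale variance)
alpha_from_cov <- function(S) {
  k <- ncol(S)
  k / (k - 1) * (1 - sum(diag(S)) / sum(S))
}

dat <- data.frame(item1 = c(4, 2, 3, 4, 1, 2, 4, 2),
                  item2 = c(4, 3, 3, 3, 2, 2, 4, 1),
                  item3 = c(3, 2, 4, 2, 1, 3, 4, 1))

alpha_from_cov(cov(dat))   # unstandardized coefficient alpha
alpha_from_cov(cor(dat))   # standardized coefficient alpha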
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Chalmers, R. P. (2018). On misconceptions and the limited usefulness of ordinal alpha. Educational and Psychological Measurement, 78, 1056-1071. https://doi.org/10.1177/0013164417727036
Cronbach, L.J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297-334. https://doi.org/10.1007/BF02310555
Cronbach, L.J. (2004). My current thoughts on coefficient alpha and successor procedures. Educational and Psychological Measurement, 64, 391-418. https://doi.org/10.1177/0013164404266386
Feldt, L. S., Woodruff, D. J., & Salih, F. A. (1987). Statistical inference for coefficient alpha. Applied Psychological Measurement, 11, 93-103. https://doi.org/10.1177/014662168701100107
Kelley, K., & Pornprasertmanit, S. (2016). Confidence intervals for population reliability coefficients: Evaluation of methods, recommendations, and software for composite measures. Psychological Methods, 21, 69-92. https://doi.org/10.1037/a0040086
Kelley, K. (2019). MBESS: The MBESS R Package. R package version 4.6.0. https://CRAN.R-project.org/package=MBESS
Zumbo, B. D., & Kroc, E. (2019). A measurement is a choice and Stevens' scales of measurement do not help make it: A response to Chalmers. Educational and Psychological Measurement, 79, 1184-1197. https://doi.org/10.1177/0013164419844305
Zumbo, B. D., Gadermann, A. M., & Zeisser, C. (2007). Ordinal versions of coefficients alpha and theta for Likert rating scales. Journal of Modern Applied Statistical Methods, 6, 21-29. https://doi.org/10.22237/jmasm/1177992180
write.result, item.cfa, item.omega, item.reverse, item.scores
dat <- data.frame(item1 = c(4, 2, 3, 4, 1, 2, 4, 2),
                  item2 = c(4, 3, 3, 3, 2, 2, 4, 1),
                  item3 = c(3, 2, 4, 2, 1, 3, 4, 1),
                  item4 = c(4, 1, 2, 3, 2, 3, 4, 2))

# Example 1a: Compute unstandardized coefficient alpha and item statistics
item.alpha(dat)

# Example 1b: Alternative specification using the 'data' argument
item.alpha(., data = dat)

# Example 2: Compute standardized coefficient alpha and item statistics
item.alpha(dat, std = TRUE)

# Example 3: Compute unstandardized coefficient alpha
item.alpha(dat, print = "alpha")

# Example 4: Compute item statistics
item.alpha(dat, print = "item")

# Example 5: Compute unstandardized coefficient alpha and item statistics
# while excluding item3
item.alpha(dat, exclude = "item3")

# Example 6: Compute variance-covariance matrix
dat.cov <- cov(dat)
# Compute unstandardized coefficient alpha based on the variance-covariance matrix
item.alpha(dat.cov)

# Compute correlation matrix
dat.cor <- cor(dat)
# Example 7: Compute standardized coefficient alpha based on the correlation matrix
item.alpha(dat.cor)

# Example 8: Compute ordinal coefficient alpha
item.alpha(dat, ordered = TRUE)

## Not run:
# Example 9a: Write results into a text file
result <- item.alpha(dat, write = "Alpha.txt")

# Example 9b: Write results into an Excel file
result <- item.alpha(dat, write = "Alpha.xlsx")

result <- item.alpha(dat, output = FALSE)
write.result(result, "Alpha.xlsx")
## End(Not run)
This function is a wrapper function for conducting confirmatory factor analysis with continuous and/or ordered-categorical indicators by calling the cfa function in the R package lavaan.
item.cfa(..., data = NULL, model = NULL, rescov = NULL, hierarch = FALSE, meanstructure = TRUE, ident = c("marker", "var", "effect"), parameterization = c("delta", "theta"), ordered = NULL, cluster = NULL, estimator = c("ML", "MLM", "MLMV", "MLMVS", "MLF", "MLR", "GLS", "WLS", "DWLS", "WLSM", "WLSMV", "ULS", "ULSM", "ULSMV", "DLS", "PML"), missing = c("listwise", "pairwise", "fiml", "two.stage", "robust.two.stage", "doubly.robust"), print = c("all", "summary", "coverage", "descript", "fit", "est", "modind", "resid"), mod.minval = 6.63, resid.minval = 0.1, digits = 3, p.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. If |
data |
a data frame when specifying one or more variables in the
argument |
model |
a character vector specifying a measurement model with
one factor, or a list of character vectors for specifying
a measurement model with more than one factor, e.g.,
|
rescov |
a character vector or a list of character vectors for
specifying residual covariances, e.g.
|
hierarch |
logical: if |
meanstructure |
logical: if |
ident |
a character string indicating the method used for
identifying and scaling latent variables, i.e.,
|
parameterization |
a character string indicating the method used for
identifying and scaling latent variables when indicators
are ordered, i.e., |
ordered |
if |
cluster |
either a character string indicating the variable name
of the cluster variable in |
estimator |
a character string indicating the estimator to be used
(see 'Details'). By default, |
missing |
a character string indicating how to deal with missing
data, i.e., |
print |
a character string or character vector indicating which
results to show on the console, i.e. |
mod.minval |
numeric value to filter modification indices and only
show modifications with a modification index value equal
or higher than this minimum value. By default, modification
indices equal to or higher than 6.63 are printed. Note that a
modification index value of 6.63 is equivalent to a
significance level of |
resid.minval |
numeric value indicating the minimum absolute residual correlation coefficients and standardized means to highlight in boldface. By default, absolute residual correlation coefficients and standardized means equal to or higher than 0.1 are highlighted. Note that highlighting can be disabled by setting the minimum value to 1. |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The R package lavaan provides seven estimators that affect the estimation, namely "ML", "GLS", "WLS", "DWLS", "ULS", "DLS", and "PML". All other options for the argument estimator combine these estimators with various standard error and chi-square test statistic computations. Note that the estimators also differ in how missing values can be dealt with (e.g., listwise deletion, pairwise deletion, or full information maximum likelihood, FIML).
"ML"
: Maximum likelihood parameter estimates with conventional standard errors
and conventional test statistic. For both complete and incomplete data
using pairwise deletion or FIML.
"MLM"
: Maximum likelihood parameter estimates with conventional
robust standard errors and a Satorra-Bentler scaled test statistic that
are robust to non-normality. For complete data only.
"MLMV"
: Maximum likelihood parameter estimates with conventional
robust standard errors and a mean and a variance adjusted test statistic
using a scale-shifted approach that are robust to non-normality. For complete
data only.
"MLMVS"
: Maximum likelihood parameter estimates with conventional
robust standard errors and a mean and a variance adjusted test statistic
using the Satterthwaite approach that are robust to non-normality. For complete
data only.
"MLF"
: Maximum likelihood parameter estimates with standard
errors approximated by first-order derivatives and conventional test statistic.
For both complete and incomplete data using pairwise deletion or FIML.
"MLR"
: Maximum likelihood parameter estimates with Huber-White
robust standard errors a test statistic which is asymptotically equivalent
to the Yuan-Bentler T2* test statistic that are robust to non-normality
and non-independence of observed when specifying a cluster variable using
the argument cluster
. For both complete and incomplete data using
pairwise deletion or FIML.
"GLS"
: Generalized least squares parameter estimates with
conventional standard errors and conventional test statistic that uses a
normal-theory based weight matrix. For complete data only.
and conventional chi-square test. For both complete and incomplete data.
"WLS"
: Weighted least squares parameter estimates (sometimes
called ADF estimation) with conventional standard errors and conventional
test statistic that uses a full weight matrix. For complete data only.
"DWLS"
: Diagonally weighted least squares parameter estimates
which uses the diagonal of the weight matrix for estimation with conventional
standard errors and conventional test statistic. For both complete and
incomplete data using pairwise deletion.
"WLSM"
: Diagonally weighted least squares parameter estimates
which uses the diagonal of the weight matrix for estimation, but uses the
full weight matrix for computing the conventional robust standard errors
and a Satorra-Bentler scaled test statistic. For both complete and incomplete
data using pairwise deletion.
"WLSMV"
: Diagonally weighted least squares parameter estimates
which uses the diagonal of the weight matrix for estimation, but uses the
full weight matrix for computing the conventional robust standard errors
and a mean and a variance adjusted test statistic using a scale-shifted
approach. For both complete and incomplete data using pairwise deletion.
"ULS"
: Unweighted least squares parameter estimates with
conventional standard errors and conventional test statistic. For both
complete and incomplete data using pairwise deletion.
"ULSM"
: Unweighted least squares parameter estimates with
conventional robust standard errors and a Satorra-Bentler scaled test
statistic. For both complete and incomplete data using pairwise deletion.
"ULSMV"
: Unweighted least squares parameter estimates with
conventional robust standard errors and a mean and a variance adjusted
test statistic using a scale-shifted approach. For both complete and
incomplete data using pairwise deletion.
"DLS"
: Distributionally-weighted least squares parameter
estimates with conventional robust standard errors and a Satorra-Bentler
scaled test statistic. For complete data only.
"PML"
: Pairwise maximum likelihood parameter estimates
with Huber-White robust standard errors and a mean and a variance adjusted
test statistic using the Satterthwaite approach. For both complete and
incomplete data using pairwise deletion.
The R package lavaan provides six methods for dealing with missing data:

"listwise": Listwise deletion, i.e., all cases with missing values are removed from the data before conducting the analysis. This is only valid if the data are missing completely at random (MCAR).

"pairwise": Pairwise deletion, i.e., each element of the variance-covariance matrix is computed using cases that have the data needed for estimating that element. This is only valid if the data are missing completely at random (MCAR).

"fiml": Full information maximum likelihood (FIML) method, i.e., the likelihood is computed case by case using all available data from that case. The FIML method is only applicable for the following estimators: "ML", "MLF", and "MLR".

"two.stage": Two-stage maximum likelihood estimation, i.e., sample statistics are estimated using the EM algorithm in the first step. Then, these estimated sample statistics are used as input for a regular analysis. Standard errors and test statistics are adjusted correctly to reflect the two-step procedure. The two-stage method is only applicable for the following estimators: "ML", "MLF", and "MLR".

"robust.two.stage": Robust two-stage maximum likelihood estimation, i.e., two-stage maximum likelihood estimation with standard errors and a test statistic that are robust against non-normality. The robust two-stage method is only applicable for the following estimators: "ML", "MLF", and "MLR".

"doubly.robust": Doubly-robust method, only applicable to pairwise maximum likelihood estimation (i.e., estimator = "PML").
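For instance, the estimator and missing arguments can be combined; a brief sketch of illustrative calls, assuming the example data shipped with lavaan:

# Load example data from the lavaan package
data("HolzingerSwineford1939", package = "lavaan")

# Robust maximum likelihood (MLR) with full information maximum likelihood
item.cfa(x1:x3, data = HolzingerSwineford1939, estimator = "MLR", missing = "fiml")

# WLSMV with pairwise deletion for ordered-categorical indicators
item.cfa(round(HolzingerSwineford1939[, c("x4", "x5", "x6")]),
         ordered = TRUE, estimator = "WLSMV", missing = "pairwise")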
In line with the R package lavaan, this function provides several checks for model convergence and model identification:

Degrees of freedom: An error message is printed if the number of degrees of freedom is negative, i.e., the model is not identified.

Model convergence: An error message is printed if the optimizer has not converged, i.e., results are most likely unreliable.

Standard errors: An error message is printed if the standard errors could not be computed, i.e., the model might not be identified.

Variance-covariance matrix of the estimated parameters: A warning message is printed if the variance-covariance matrix of the estimated parameters is not positive definite, i.e., the smallest eigenvalue of the matrix is smaller than zero or very close to zero.

Negative variances of observed variables: A warning message is printed if the estimated variances of the observed variables are negative.

Variance-covariance matrix of observed variables: A warning message is printed if the estimated variance-covariance matrix of the observed variables is not positive definite, i.e., the smallest eigenvalue of the matrix is smaller than zero or very close to zero.

Negative variances of latent variables: A warning message is printed if the estimated variances of the latent variables are negative.

Variance-covariance matrix of latent variables: A warning message is printed if the estimated variance-covariance matrix of the latent variables is not positive definite, i.e., the smallest eigenvalue of the matrix is smaller than zero or very close to zero.

Note that unlike the R package lavaan, the item.cfa function does not provide any results when the degrees of freedom are negative, the model has not converged, or standard errors could not be computed.
The item.cfa function provides the chi-square test, incremental fit indices (i.e., CFI and TLI), and absolute fit indices (i.e., RMSEA and SRMR) to evaluate overall model fit. However, different versions of the CFI, TLI, and RMSEA are provided depending on the estimator. Unlike the R package lavaan, the different versions are labeled Standard, Scaled, and Robust in the output:

"Standard": CFI, TLI, and RMSEA without any non-normality corrections. These fit measures, based on the normal-theory maximum likelihood test statistic, are sensitive to deviations from multivariate normality of endogenous variables. Simulation studies by Brosseau-Liard et al. (2012) and Brosseau-Liard and Savalei (2014) showed that the uncorrected fit indices are affected by non-normality, especially at small and medium sample sizes (e.g., n < 500).

"Scaled": Population-corrected robust CFI, TLI, and RMSEA with ad hoc non-normality corrections that simply replace the maximum likelihood test statistic with a robust test statistic (e.g., mean-adjusted chi-square). These fit indices change the population value being estimated depending on the degree of non-normality present in the data. Brosseau-Liard et al. (2012) demonstrated that the ad hoc corrected RMSEA increasingly accepts poorly fitting models as non-normality in the data increases, while the effect of the ad hoc correction on the CFI and TLI is less predictable, with non-normality making fit appear worse, better, or nearly unchanged (Brosseau-Liard & Savalei, 2014).

"Robust": Sample-corrected robust CFI, TLI, and RMSEA with non-normality corrections based on formulas provided by Li and Bentler (2006) and Brosseau-Liard and Savalei (2014). These fit indices do not change the population value being estimated and can be interpreted the same way as the uncorrected fit indices had the data been normal.

In conclusion, the use of sample-corrected fit indices (Robust) instead of population-corrected fit indices (Scaled) is recommended. Note that when the sample size is very small (e.g., n < 200), the non-normality correction does not appear to adjust fit indices sufficiently to counteract the effect of non-normality (Brosseau-Liard & Savalei, 2014).
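Because the fitted lavaan object is returned in the model.fit entry, the three versions can also be inspected directly; a minimal sketch using lavaan::fitMeasures() with lavaan's own fit measure labels (which item.cfa relabels as Standard, Scaled, and Robust):

data("HolzingerSwineford1939", package = "lavaan")
mod <- item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3", "x4")],
                estimator = "MLR", output = FALSE)

# Standard, scaled (ad hoc corrected), and robust (sample-corrected) versions
lavaan::fitMeasures(mod$model.fit,
                    c("cfi", "cfi.scaled", "cfi.robust",
                      "rmsea", "rmsea.scaled", "rmsea.robust"))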
The item.cfa function provides modification indices and the residual correlation matrix when requested via the print argument. Modification indices (aka score tests) are univariate Lagrange multipliers (LM) representing a chi-square statistic with a single degree of freedom. The LM approximates the amount by which the chi-square test statistic would decrease if a fixed or constrained parameter were freely estimated (Kline, 2023). However, (standardized) expected parameter change (EPC) values should also be inspected, since modification indices are sensitive to sample size. EPC values are an estimate of how much a parameter would be expected to change if it were freely estimated (Brown, 2023). The residual correlation matrix is computed by separately converting the sample covariance and model-implied covariance matrices to correlation matrices before calculating differences between observed and predicted correlations (i.e., type = "cor.bollen"). As a rule of thumb, absolute correlation residuals greater than .10 indicate possible evidence of poor local fit, whereas correlation residuals smaller than .05 indicate a negligible degree of model misfit (Maydeu-Olivares, 2017). There is no reliable connection between the size of diagnostic statistics (i.e., modification indices and residuals) and the type or amount of model misspecification, since (1) diagnostic statistics are themselves affected by misspecification, (2) misspecification in one part of the model distorts estimates in other parts of the model (i.e., error propagation), and (3) equivalent models have identical residuals but contradict the pattern of causal effects (Kline, 2023). Note that according to Kline (2023), "any report of the results without information about the residuals is deficient" (p. 172).
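These diagnostics can likewise be reproduced from the returned lavaan object; a brief sketch using lavaan functions directly rather than the formatted item.cfa output:

data("HolzingerSwineford1939", package = "lavaan")
mod <- item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3", "x4")], output = FALSE)

# Modification indices (score tests) at or above the default cutoff of 6.63
lavaan::modindices(mod$model.fit, minimum.value = 6.63)

# Residual correlation matrix: observed minus model-implied correlations
lavaan::lavResiduals(mod$model.fit, type = "cor.bollen")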
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
model |
specified model |
model.fit |
fitted lavaan object ( |
check |
results of the convergence and model identification check |
result |
list with result tables, i.e., |
The function uses the functions cfa, lavInspect, lavTech, modindices, parameterEstimates, and standardizedsolution provided in the R package lavaan by Yves Rosseel (2012).
Takuya Yanagida [email protected]
Brosseau-Liard, P. E., Savalei, V., & Li, L. (2012). An investigation of the sample performance of two nonnormality corrections for RMSEA. Multivariate Behavioral Research, 47, 904-930. https://doi.org/10.1080/00273171.2012.715252
Brosseau-Liard, P. E., & Savalei, V. (2014). Adjusting incremental fit indices for nonnormality. Multivariate Behavioral Research, 49, 460-470. https://doi.org/10.1080/00273171.2014.933697
Brown, T. A. (2023). Confirmatory factor analysis. In R. H. Hoyle (Ed.), Handbook of structural equation modeling (2nd ed.) (pp. 361–379). The Guilford Press.
Kline, R. B. (2023). Principles and practice of structural equation modeling (5th ed.). Guilford Press.
Li, L., & Bentler, P. M. (2006). Robust statistical tests for evaluating the hypothesis of close fit of misspecified mean and covariance structural models. UCLA Statistics Preprint #506. University of California.
Maydeu-Olivares, A. (2017). Assessing the size of model misfit in structural equation models. Psychometrika, 82(3), 533–558. https://doi.org/10.1007/s11336-016-9552-7
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02
item.alpha, item.omega, item.scores
## Not run:
# Load data set "HolzingerSwineford1939" in the lavaan package
data("HolzingerSwineford1939", package = "lavaan")

#----------------------------------------------------------------------------
# Measurement model with one factor

# Example 1a: Specification using the argument 'x'
item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")])

# Example 1b: Alternative specification using the 'data' argument
item.cfa(x1:x3, data = HolzingerSwineford1939)

# Example 1c: Alternative specification using the argument 'model'
item.cfa(HolzingerSwineford1939, model = c("x1", "x2", "x3"))

# Example 1d: Alternative specification using the 'data' and 'model' argument
item.cfa(., data = HolzingerSwineford1939, model = c("x1", "x2", "x3"))

# Example 1e: Alternative specification using the argument 'model'
item.cfa(HolzingerSwineford1939, model = list(visual = c("x1", "x2", "x3")))

# Example 1f: Alternative specification using the 'data' and 'model' argument
item.cfa(., data = HolzingerSwineford1939, model = list(visual = c("x1", "x2", "x3")))

#----------------------------------------------------------------------------
# Measurement model with three factors

# Example 2: Specification using the argument 'model'
item.cfa(HolzingerSwineford1939,
         model = list(visual = c("x1", "x2", "x3"),
                      textual = c("x4", "x5", "x6"),
                      speed = c("x7", "x8", "x9")))

#----------------------------------------------------------------------------
# Residual covariances

# Example 3a: One residual covariance
item.cfa(HolzingerSwineford1939,
         model = list(visual = c("x1", "x2", "x3"),
                      textual = c("x4", "x5", "x6"),
                      speed = c("x7", "x8", "x9")),
         rescov = c("x1", "x2"))

# Example 3b: Two residual covariances
item.cfa(HolzingerSwineford1939,
         model = list(visual = c("x1", "x2", "x3"),
                      textual = c("x4", "x5", "x6"),
                      speed = c("x7", "x8", "x9")),
         rescov = list(c("x1", "x2"), c("x4", "x5")))

#----------------------------------------------------------------------------
# Second-order factor model based on three first-order factors

# Example 4
item.cfa(HolzingerSwineford1939,
         model = list(visual = c("x1", "x2", "x3"),
                      textual = c("x4", "x5", "x6"),
                      speed = c("x7", "x8", "x9")),
         hierarch = TRUE)

#----------------------------------------------------------------------------
# Measurement model with ordered-categorical indicators

# Example 5
item.cfa(round(HolzingerSwineford1939[, c("x4", "x5", "x6")]), ordered = TRUE)

#----------------------------------------------------------------------------
# Cluster-robust standard errors

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Example 6a: Specification using a variable in 'x'
item.cfa(Demo.twolevel[, c("y4", "y5", "y6", "cluster")], cluster = "cluster")

# Example 6b: Specification of the cluster variable in 'cluster'
item.cfa(Demo.twolevel[, c("y4", "y5", "y6")], cluster = Demo.twolevel$cluster)

# Example 6c: Alternative specification using the 'data' argument
item.cfa(y4:y6, data = Demo.twolevel, cluster = "cluster")

#----------------------------------------------------------------------------
# Print argument

# Example 7a: Request all results
item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")], print = "all")

# Example 7b: Request modification indices with value equal or higher than 5
item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3", "x4")],
         print = "modind", mod.minval = 5)

#----------------------------------------------------------------------------
# lavaan summary of the estimated model

# Example 8
mod <- item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")], output = FALSE)
lavaan::summary(mod$model.fit, standardized = TRUE, fit.measures = TRUE)

#----------------------------------------------------------------------------
# Write Results

# Example 9a: Write results into a text file
item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")], write = "CFA.txt")

# Example 9b: Write results into an Excel file
item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")], write = "CFA.xlsx")

result <- item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")], output = FALSE)
write.result(result, "CFA.xlsx")
## End(Not run)
This function is a wrapper function for evaluating configural, metric, scalar, and strict between-group or longitudinal (partial) measurement invariance using confirmatory factor analysis with continuous indicators by calling the cfa function in the R package lavaan. By default, the function evaluates configural, metric, and scalar measurement invariance by providing a table with model fit information (i.e., chi-square test, fit indices based on a proper null model, and information criteria) and model comparison (i.e., chi-square difference test, change in fit indices, and change in information criteria). Additionally, variance-covariance coverage of the data, descriptive statistics, parameter estimates, modification indices, and the residual correlation matrix can be requested by specifying the argument print.
item.invar(..., data = NULL, model = NULL, rescov = NULL, rescov.long = TRUE, group = NULL, long = FALSE, cluster = NULL, invar = c("config", "metric", "scalar", "strict"), partial = NULL, ident = c("marker", "var", "effect"), estimator = c("ML", "MLM", "MLMV", "MLMVS", "MLF", "MLR", "GLS", "WLS", "DWLS", "WLSM", "WLSMV", "ULS", "ULSM", "ULSMV", "DLS", "PML"), missing = c("listwise", "pairwise", "fiml", "two.stage", "robust.two.stage", "doubly.robust"), null.model = TRUE, print = c("all", "summary", "coverage", "descript", "fit", "est", "modind", "resid"), print.fit = c("all", "standard", "scaled", "robust"), mod.minval = 6.63, resid.minval = 0.1, digits = 3, p.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. If |
data |
a data frame when specifying one or more variables in the
argument |
model |
a character vector specifying a measurement model with one
factor, or a list of character vectors for specifying a
measurement model with more than one factor for evaluating
between-group measurement invariance when |
rescov |
a character vector or a list of character vectors for specifying
residual covariances, e.g., |
rescov.long |
logical: if |
group |
either a character string indicating the variable name of
the grouping variable in the matrix or data frame specified
in |
long |
logical: if |
cluster |
either a character string indicating the variable name
of the cluster variable in |
invar |
a character string indicating the level of measurement
invariance to be evaluated, i.e., |
partial |
a character string or character vector containing the labels
of the parameters which should be free in all groups or across
time to specify a partial measurement invariance model. Note
that the labels of the parameters need to match the labels
shown in the output, i.e., |
ident |
a character string indicating the method used for identifying
and scaling latent variables, i.e., |
estimator |
a character string indicating the estimator to be used
(see 'Details' in the help page of the |
missing |
a character string indicating how to deal with missing data,
i.e., |
null.model |
logical: if |
print |
a character string or character vector indicating which results
to show on the console, i.e. |
print.fit |
a character string or character vector indicating which
version of the CFI, TLI, and RMSEA to show on the console
when using a robust estimation method involving a scaling
correction factor, i.e., |
mod.minval |
numeric value to filter modification indices and only show
modifications with a modification index value equal or higher
than this minimum value. By default, modification indices
equal to or higher than 6.63 are printed. Note that a modification
index value of 6.63 is equivalent to a significance level
of |
resid.minval |
numeric value indicating the minimum absolute residual correlation coefficients and standardized means to highlight in boldface. By default, absolute residual correlation coefficients and standardized means equal to or higher than 0.1 are highlighted. Note that highlighting can be disabled by setting the minimum value to 1. |
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that information
criteria and chi-square test statistic are printed with
|
p.digits |
an integer value indicating the number of decimal places
to be used for displaying p-values, covariance coverage
(i.e., |
as.na |
a numeric vector indicating user-defined missing values, i.e.,
these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame including all variables used in the analysis, i.e., indicators for the factor, grouping variable and cluster variable |
args |
specification of function arguments |
model |
list with specified model for the configural, metric, scalar, and strict invariance model |
model.fit |
list with fitted lavaan object of the configural, metric, scalar, and strict invariance model |
check |
list with the results of the convergence and model identification check for the configural, metric, scalar, and strict invariance model |
result |
list with result tables, i.e., |
The function uses the functions cfa, fitmeasures, lavInspect, lavTech, lavTestLRT, lavTestScore, modindices, parameterEstimates, parTable, and standardizedsolution provided in the R package lavaan by Yves Rosseel (2012).
Takuya Yanagida [email protected]
Brosseau-Liard, P. E., & Savalei, V. (2014). Adjusting incremental fit indices for nonnormality. Multivariate Behavioral Research, 49, 460-470. https://doi.org/10.1080/00273171.2014.933697
Li, L., & Bentler, P. M. (2006). Robust statistical tests for evaluating the hypothesis of close fit of misspecified mean and covariance structural models. UCLA Statistics Preprint #506. University of California.
Little, T. D. (2013). Longitudinal structural equation modeling. Guilford Press.
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02
item.cfa, multilevel.invar, write.result
## Not run:
# Load data set "HolzingerSwineford1939" in the lavaan package
data("HolzingerSwineford1939", package = "lavaan")

#-------------------------------------------------------------------------------
# Between-Group Measurement Invariance Evaluation

#..................
# Measurement model with one factor

# Example 1a: Specification of the grouping variable in 'x'
item.invar(HolzingerSwineford1939[, c("x1", "x2", "x3", "x4", "sex")], group = "sex")

# Example 1b: Specification of the grouping variable in 'group'
item.invar(HolzingerSwineford1939[, c("x1", "x2", "x3", "x4")],
           group = HolzingerSwineford1939$sex)

# Example 1c: Alternative specification using the 'data' argument
item.invar(x1:x4, data = HolzingerSwineford1939, group = "sex")

# Example 1d: Alternative specification using the argument 'model'
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"), group = "sex")

# Example 1e: Alternative specification using the 'data' and 'model' argument
item.invar(., data = HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex")

#..................
# Measurement model with two factors

item.invar(HolzingerSwineford1939,
           model = list(c("x1", "x2", "x3", "x4"),
                        c("x5", "x6", "x7", "x8")), group = "sex")

#..................
# Configural, metric, scalar, and strict measurement invariance

# Example 2: Evaluate configural, metric, scalar, and strict measurement invariance
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", invar = "strict")

#..................
# Partial measurement invariance

# Example 3: Free second factor loading (L2) and third intercept (T3)
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", partial = c("L2", "T3"), print = c("fit", "est"))

#..................
# Residual covariances

# Example 4a: One residual covariance
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           rescov = c("x3", "x4"), group = "sex")

# Example 4b: Two residual covariances
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           rescov = list(c("x1", "x2"), c("x3", "x4")), group = "sex")

#..................
# Scaled test statistic and cluster-robust standard errors

# Example 5a: Specify cluster variable using a variable name in 'x'
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", cluster = "agemo")

# Example 5b: Specify vector of the cluster variable in the argument 'cluster'
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", cluster = HolzingerSwineford1939$agemo)

#..................
# Default Null model

# Example 6: Specify default null model for computing incremental fit indices
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", null.model = FALSE)

#..................
# Print argument

# Example 7a: Request all results
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", print = "all")

# Example 7b: Request fit indices with ad hoc non-normality correction
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", print.fit = "scaled")

# Example 7c: Request modification indices with value equal or higher than 10
# and highlight residual correlations equal or higher than 0.3
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", print = c("modind", "resid"),
           mod.minval = 10, resid.minval = 0.3)

#..................
# Model syntax and lavaan summary of the estimated model

# Example 8
mod <- item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
                  group = "sex", output = FALSE)

# lavaan model syntax of the scalar invariance model
cat(mod$model$scalar)

# lavaan summary of the scalar invariance model
lavaan::summary(mod$model.fit$scalar, standardized = TRUE, fit.measures = TRUE)

#-------------------------------------------------------------------------------
# Longitudinal Measurement Invariance Evaluation

# Example 9: Two time points with three indicators at each time point
item.invar(HolzingerSwineford1939,
           model = list(c("x1", "x2", "x3"),
                        c("x5", "x6", "x7")), long = TRUE)

#------------------------------------------------
# Write Results

# Example 10a: Write results into a text file
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", print = "all", write = "Invariance.txt", output = FALSE)

# Example 10b: Write results into an Excel file
item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
           group = "sex", print = "all", write = "Invariance.xlsx", output = FALSE)

result <- item.invar(HolzingerSwineford1939, model = c("x1", "x2", "x3", "x4"),
                     group = "sex", print = "all", output = FALSE)
write.result(result, "Invariance.xlsx")
## End(Not run)
This function computes the point estimate and confidence interval for coefficient omega (McDonald, 1978), hierarchical omega (Kelley & Pornprasertmanit, 2016), and categorical omega (Green & Yang, 2009), along with standardized factor loadings and omega if an item is deleted.
item.omega(..., data = NULL, rescov = NULL, type = c("omega", "hierarch", "categ"), exclude = NULL, std = FALSE, na.omit = FALSE, print = c("all", "omega", "item"), digits = 2, conf.level = 0.95, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. Note that at least three items are
needed for computing omega. Alternatively, an expression
indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
rescov |
a character vector or a list of character vectors for specifying
residual covariances when computing coefficient omega, e.g.
|
type |
a character string indicating the type of omega to be computed, i.e.,
|
exclude |
a character vector indicating items to be excluded from the analysis. |
std |
logical: if |
na.omit |
logical: if |
print |
a character vector indicating which results to show, i.e.
|
digits |
an integer value indicating the number of decimal places to be used for displaying omega and standardized factor loadings. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Omega is computed by estimating a confirmatory factor analysis model using the cfa() function in the lavaan package by Yves Rosseel (2019). The maximum likelihood ("ML") estimator is used for computing coefficient omega and hierarchical omega, while the diagonally weighted least squares estimator ("DWLS") is used for computing categorical omega.
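Conceptually, for a one-factor model without residual covariances, coefficient omega relates the variance due to the common factor to the total scale variance; the following is a minimal sketch of this textbook formula with hypothetical loadings and residual variances (the helper omega_from_cfa() is illustrative, not the function's internal code):

# omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of residual variances)
omega_from_cfa <- function(loading, resid.var) {
  sum(loading)^2 / (sum(loading)^2 + sum(resid.var))
}

# Hypothetical unstandardized loadings and residual variances of four items
omega_from_cfa(loading = c(0.8, 0.7, 0.9, 0.6), resid.var = c(0.4, 0.5, 0.3, 0.6))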
Approximate confidence intervals are computed using the procedure by Feldt, Woodruff, and Salih (1987). Note that there are at least 10 other procedures for computing the confidence interval (see Kelley & Pornprasertmanit, 2016), which are implemented in the ci.reliability() function in the MBESS package by Ken Kelley (2019).
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
model.fit |
fitted lavaan object ( |
result |
list with result tables, i.e., |
Computation of the hierarchical and categorical omega is based on the ci.reliability() function in the MBESS package by Ken Kelley (2019).
Takuya Yanagida [email protected]
Feldt, L. S., Woodruff, D. J., & Salih, F. A. (1987). Statistical inference for coefficient alpha. Applied Psychological Measurement, 11, 93-103. https://doi.org/10.1177/014662168701100107
Green, S. B., & Yang, Y. (2009). Reliability of summed item scores using structural equation modeling: An alternative to coefficient alpha. Psychometrika, 74, 155-167. https://doi.org/10.1007/s11336-008-9099-3
Kelley, K., & Pornprasertmanit, S. (2016). Confidence intervals for population reliability coefficients: Evaluation of methods, recommendations, and software for composite measures. Psychological Methods, 21, 69-92. http://dx.doi.org/10.1037/a0040086
Kelley, K. (2019). MBESS: The MBESS R Package. R package version 4.6.0. https://CRAN.R-project.org/package=MBESS
McDonald, R. P. (1978). Generalizability in factorable domains: Domain validity and generalizability. Educational and Psychological Measurement, 38, 75-79.
write.result, item.alpha, item.cfa, item.reverse, item.scores
## Not run:
dat <- data.frame(item1 = c(5, 2, 3, 4, 1, 2, 4, 2),
                  item2 = c(5, 3, 3, 5, 2, 2, 5, 1),
                  item3 = c(4, 2, 4, 5, 1, 3, 5, 1),
                  item4 = c(5, 1, 2, 5, 2, 3, 4, 2))

# Example 1a: Compute unstandardized coefficient omega and item statistics
item.omega(dat)

# Example 1b: Alternative specification using the 'data' argument
item.omega(., data = dat)

# Example 2: Compute unstandardized coefficient omega with a residual covariance
# and item statistics
item.omega(dat, rescov = c("item1", "item2"))

# Example 3: Compute unstandardized coefficient omega with residual covariances
# and item statistics
item.omega(dat, rescov = list(c("item1", "item2"), c("item1", "item3")))

# Example 4: Compute unstandardized hierarchical omega and item statistics
item.omega(dat, type = "hierarch")

# Example 5: Compute categorical omega and item statistics
item.omega(dat, type = "categ")

# Example 6: Compute standardized coefficient omega and item statistics
item.omega(dat, std = TRUE)

# Example 7: Compute unstandardized coefficient omega
item.omega(dat, print = "omega")

# Example 8: Compute item statistics
item.omega(dat, print = "item")

# Example 9: Compute unstandardized coefficient omega and item statistics
# while excluding item3
item.omega(dat, exclude = "item3")

# Example 10: Summary of the CFA model used to compute coefficient omega
lavaan::summary(item.omega(dat, output = FALSE)$model.fit,
                fit.measures = TRUE, standardized = TRUE)

# Example 11a: Write results into a text file
item.omega(dat, write = "Omega.txt")

# Example 11b: Write results into an Excel file
item.omega(dat, write = "Omega.xlsx")

result <- item.omega(dat, output = FALSE)
write.result(result, "Omega.xlsx")
## End(Not run)
This function reverse codes inverted items, i.e., items that are negatively worded.
item.reverse(..., data = NULL, min = NULL, max = NULL, keep = NULL, append = TRUE, name = ".r", as.na = NULL, table = FALSE, check = TRUE)
... |
a numeric vector for reverse coding an item, matrix or data frame
for reverse coding more than one item. Alternatively, an expression
indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
min |
an integer indicating the minimum of the item (i.e., lowest possible scale value). |
max |
an integer indicating the maximum of the item (i.e., highest possible scale value). |
keep |
a numeric vector indicating values not to be reverse coded. |
append |
logical: if |
name |
a character string or character vector indicating the names
of the reverse coded item. By default, variables are named with the ending
|
as.na |
a numeric vector indicating user-defined missing values, i.e. these
values are converted to |
table |
logical: if |
check |
logical: if |
If the arguments min and/or max are not specified, the empirical minimum and/or maximum is computed from the data. Note, however, that reverse coding might fail if the lowest or highest possible scale value is not represented in the data. That is, it is always preferable to specify the arguments min and max.
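The underlying arithmetic is straightforward: a response x on a scale ranging from min to max is reverse coded as min + max - x. A minimal sketch in base R (values retained via the keep argument are left unchanged by the function):

# Reverse code responses on a 1-to-5 scale: 1 -> 5, 2 -> 4, ..., 5 -> 1
x <- c(1, 5, 3, 2)
scale.min <- 1
scale.max <- 5
scale.min + scale.max - x   # returns 5 1 3 4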
Returns a numeric vector or data frame with the same length or same number of rows as ... containing the reverse coded scale item(s).
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. New York: John Wiley & Sons.
item.alpha, item.omega, rec, item.scores
dat <- data.frame(item1 = c(1, 5, 3, 1, 4, 4, 1, 5),
                  item2 = c(1, 1.3, 1.7, 2, 2.7, 3.3, 4.7, 5),
                  item3 = c(4, 2, 4, 5, 1, 3, 5, -99))

# Example 1a: Reverse code item1 and append to 'dat'
dat$item1r <- item.reverse(dat$item1, min = 1, max = 5)

# Example 1b: Alternative specification using the 'data' argument
item.reverse(item1, data = dat, min = 1, max = 5)

# Example 2: Reverse code item3 while keeping the value -99
dat$item3r <- item.reverse(dat$item3, min = 1, max = 5, keep = -99)

# Example 3: Reverse code item3 while keeping the value -99 and check recoding
dat$item3r <- item.reverse(dat$item3, min = 1, max = 5, keep = -99, table = TRUE)

# Example 4a: Reverse code item1, item2, and item3 and attach to 'dat'
dat <- cbind(dat,
             item.reverse(dat[, c("item1", "item2", "item3")],
                          min = 1, max = 5, keep = -99))

# Example 4b: Alternative specification using the 'data' argument
item.reverse(item1:item3, data = dat, min = 1, max = 5, keep = -99)
By default, this function computes (prorated) scale scores by averaging the (available) items that measure a single construct.
item.scores(..., data = NULL, fun = c("mean", "sum", "median", "var", "sd", "min", "max"), prorated = TRUE, p.avail = NULL, n.avail = NULL, append = TRUE, name = "scores", as.na = NULL, check = TRUE)
... |
a matrix or data frame with numeric vectors. Alternatively, an
expression indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
fun |
a character string indicating the function used to compute
scale scores, default: |
prorated |
logical: if |
p.avail |
a numeric value indicating the minimum proportion of available
item responses needed for computing a prorated scale score for
each case, e.g. |
n.avail |
an integer indicating the minimum number of available item
responses needed for computing a prorated scale score for each
case, e.g. |
append |
logical: if |
name |
a character string indicating the names of the variable appended
to the data frame specified in the argument |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
Prorated mean scale scores are computed by averaging the available items, e.g., if a participant answers 4 out of 8 items, the prorated scale score is the average of the 4 responses. Averaging the available items is equivalent to substituting the mean of a participant's own observed items for each of the participant's missing items, i.e., person mean imputation (Mazza, Enders & Ruehlman, 2015) or ipsative mean imputation (Schafer & Graham, 2002).
Proration may be reasonable when (1) a relatively high proportion of the items (e.g., 0.8), and never fewer than half, are used to form the scale score, (2) the means of the items comprising a scale are similar, and (3) the item-total correlations are similar (Enders, 2010; Graham, 2009; Graham, 2012). Results of simulation studies indicate that proration is prone to substantial bias when either the item means or the inter-item correlations vary (Lee, Bartholow, McCarthy, Pederson & Sher, 2014; Mazza et al., 2015).
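To make the proration rule concrete, a prorated mean scale score with a minimum proportion of available item responses can be sketched in base R (a minimal sketch of the idea; prorated.mean() is a hypothetical helper, not part of this package):

# Average the available items per row, but return NA when fewer
# than a proportion 'p.avail' of the items is observed
prorated.mean <- function(dat, p.avail = 0.8) {
  p.obs <- rowMeans(!is.na(dat))
  ifelse(p.obs >= p.avail, rowMeans(dat, na.rm = TRUE), NA)
}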
Returns a numeric vector with the same length as nrow(x)
containing (prorated)
scale scores.
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. New York, NY: Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
Graham, J. W. (2012). Missing data: Analysis and design. New York, NY: Springer.
Lee, M. R., Bartholow, B. D., McCarthy, D. M., Pederson, S. L., & Sher, K. J. (2014). Two alternative approaches to conventional person-mean imputation scoring of the self-rating of the effects of alcohol scale (SRE). Psychology of Addictive Behaviors, 29, 231-236. https://doi.org/10.1037/adb0000015
Mazza, G. L., Enders, C. G., & Ruehlman, L. S. (2015). Addressing item-level missing data: A comparison of proration and full information maximum likelihood estimation. Multivariate Behavioral Research, 50, 504-519. https://doi.org/10.1080/00273171.2015.1068157
Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147-177. https://doi.org/10.1037/1082-989X.7.2.147
cluster.scores
, item.alpha
, item.cfa
,
item.omega
,
dat <- data.frame(item1 = c(3, 2, 4, 1, 5, 1, 3, NA),
                  item2 = c(2, 2, NA, 2, 4, 2, NA, 1),
                  item3 = c(1, 1, 2, 2, 4, 3, NA, NA),
                  item4 = c(4, 2, 4, 4, NA, 2, NA, NA),
                  item5 = c(3, NA, NA, 2, 4, 3, NA, 3))

# Example 1a: Prorated mean scale scores
item.scores(dat)

# Example 1b: Alternative specification using the 'data' argument
item.scores(., data = dat)

# Example 2: Prorated standard deviation scale scores
item.scores(dat, fun = "sd")

# Example 3: Sum scale scores without proration
item.scores(dat, fun = "sum", prorated = FALSE)

# Example 4: Prorated mean scale scores,
# minimum proportion of available item responses = 0.8
item.scores(dat, p.avail = 0.8)

# Example 5: Prorated mean scale scores,
# minimum number of available item responses = 3
item.scores(dat, n.avail = 3)
This function computes lagged values of variables by a specified number of observations. By default, the function returns lag-1 values of the vector, matrix, or data frame specified in the first argument.
lagged(..., data = NULL, id = NULL, obs = NULL, day = NULL, lag = 1, time = NULL, units = c("secs", "mins", "hours", "days", "weeks"), append = TRUE, name = ".lag", name.td = ".td", as.na = NULL, check = TRUE)
... |
a vector for computing lagged values for a variable, matrix
or data frame for computing lagged values for more than one
variable. Note that the subject ID variable ( |
data |
a data frame when specifying one or more variables in the
argument |
id |
either a character string indicating the variable name of the subject ID variable in '...' or a vector representing the subject IDs, see 'Details'. |
obs |
either a character string indicating the variable name of the observation number variable in '...' or a vector representing the observations. Note that duplicated values within the same subject ID are not allowed, see 'Details'. |
day |
either a character string indicating the variable name of the day number variable in '...' or a vector representing the days, see 'Details'. |
lag |
a numeric value specifying the lag, e.g. |
time |
a variable of class |
units |
a character string indicating the units in which the time
difference is represented, i.e., |
append |
logical: if |
name |
a character string or character vector indicating the names of
the lagged variables. By default, lagged variables are named
with the ending |
name.td |
a character string or character vector indicating the names of
the time difference variables when specifying a date and time
variables for the argument |
as.na |
a numeric vector indicating user-defined missing values, i.e.
these values are converted to |
check |
logical: if |
The function is used to create lagged versions of the variable(s) specified via the ... argument:

id: If the id argument is not specified (i.e., id = NULL), all observations are assumed to come from the same subject. If the dataset includes multiple subjects, this variable needs to be specified so that observations are not lagged across subjects.

day: If the day argument is not specified (i.e., day = NULL), values of the variable to be lagged are allowed to be lagged across days in case there are multiple observation days.

obs: If the obs argument is not specified (i.e., obs = NULL), consecutive observations from the same subjects are assumed to be one lag apart.
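The lagging logic can be sketched in base R (a minimal sketch assuming a data frame dat sorted by subject, day, and time with complete observation records; lag1() is a hypothetical helper, not part of this package):

# Shift values down by one position within each subject-by-day cell,
# so that values are never lagged across subjects or days
lag1 <- function(x) c(NA, x[-length(x)])
dat$pos.lag1 <- ave(dat$pos, dat$subject, dat$day, FUN = lag1)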
Returns a numeric vector or data frame with the same length or same number of
rows as ...
containing the lagged variable(s).
This function is based on the lagvar()
function in the esmpack
package by Wolfgang Viechtbauer and Mihail Constantin (2023).
Takuya Yanagida [email protected]
Viechtbauer W, Constantin M (2023). esmpack: Functions that facilitate preparation and management of ESM/EMA data. R package version 0.1-20.
center
, rec
, coding
, item.reverse
.
dat <- data.frame(subject = rep(1:2, each = 6),
                  day = rep(1:2, each = 3),
                  obs = rep(1:6, times = 2),
                  time = as.POSIXct(c("2024-01-01 09:01:00", "2024-01-01 12:05:00",
                                      "2024-01-01 15:14:00", "2024-01-02 09:03:00",
                                      "2024-01-02 12:21:00", "2024-01-02 15:03:00",
                                      "2024-01-01 09:02:00", "2024-01-01 12:09:00",
                                      "2024-01-01 15:06:00", "2024-01-02 09:02:00",
                                      "2024-01-02 12:15:00", "2024-01-02 15:06:00")),
                  pos = c(6, 7, 5, 8, NA, 7, 4, NA, 5, 4, 5, 3),
                  neg = c(2, 3, 2, 5, 3, 4, 6, 4, 6, 4, NA, 8))

# Example 1a: Lagged variable for 'pos'
lagged(dat$pos, id = dat$subject, day = dat$day)

# Example 1b: Alternative specification
lagged(dat[, c("pos", "subject", "day")], id = "subject", day = "day")

# Example 1c: Alternative specification using the 'data' argument
lagged(pos, data = dat, id = "subject", day = "day")

# Example 2a: Lagged variables for 'pos' and 'neg'
lagged(dat[, c("pos", "neg")], id = dat$subject, day = dat$day)

# Example 2b: Alternative specification using the 'data' argument
lagged(pos, neg, data = dat, id = "subject", day = "day")

# Example 3: Lag-2 variables for 'pos' and 'neg'
lagged(pos, neg, data = dat, id = "subject", day = "day", lag = 2)

# Example 4: Lagged variables and time difference variables
lagged(pos, neg, data = dat, id = "subject", day = "day", time = "time")

# Example 5: Lagged variables and time difference variables,
# name variables
lagged(pos, neg, data = dat, id = "subject", day = "day", time = "time",
       name = c("p.lag1", "n.lag1"), name.td = c("p.diff", "n.diff"))

# Example 6: NA observations excluded from the data frame
dat.excl <- dat[!is.na(dat$pos), ]

# Observation numbers not taken into account, i.e.,
# - observation 4 used as lagged value for observation 6 for subject 1
# - observation 1 used as lagged value for observation 3 for subject 2
lagged(pos, data = dat.excl, id = "subject", day = "day")

# Observation numbers taken into account by specifying the 'obs' argument
lagged(pos, data = dat.excl, id = "subject", day = "day", obs = "obs")
This function loads and attaches multiple add-on packages at once.
libraries(..., install = FALSE, quiet = TRUE, check = TRUE, output = TRUE)
... |
the names of the packages to be loaded, given as names
(e.g., |
install |
logical: if |
quiet |
logical: if |
check |
logical: if |
output |
logical: if |
Takuya Yanagida [email protected]
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
## Not run: 
# Example 1: Load packages using the names of the packages
misty::libraries(misty, lme4, lmerTest)

# Example 2: Load packages using literal character strings
misty::libraries("misty", "lme4", "lmerTest")

# Example 3: Load packages using a character vector
misty::libraries(c("misty", "lme4", "lmerTest"))

# Example 4: Check packages, i.e., TRUE = all depends/imports/suggests installed
misty::libraries(misty, lme4, lmerTest, output = FALSE)$result$restab

# Example 5: Depends, FALSE = not installed, TRUE = installed
misty::libraries(misty, lme4, lmerTest, output = FALSE)$result$depends

# Example 6: Imports, FALSE = not installed, TRUE = installed
misty::libraries(misty, lme4, lmerTest, output = FALSE)$result$imports

# Example 7: Suggests, FALSE = not installed, TRUE = installed
misty::libraries(misty, lme4, lmerTest, output = FALSE)$result$suggests
## End(Not run)
This wrapper function creates a Mplus input file, runs the input file by using
the mplus.run()
function, and prints the Mplus output file by using the
mplus.print()
function.
mplus(x, file = "Mplus_Input.inp", data = NULL, comment = FALSE, replace.inp = TRUE, mplus.run = TRUE, show.out = FALSE, replace.out = c("always", "never", "modified"), Mplus = detect.mplus(), print = c("all", "input", "result"), input = c("all", "default", "data", "variable", "define", "analysis", "model", "montecarlo", "mod.pop", "mod.cov", "mod.miss", "message"), result = c("all", "default", "summary.analysis.short", "summary.data.short", "random.starts", "summary.fit", "mod.est", "fit", "class.count", "classif", "mod.result", "total.indirect"), exclude = NULL, variable = FALSE, not.input = TRUE, not.result = TRUE, write = NULL, append = TRUE, check = TRUE, output = TRUE)
mplus(x, file = "Mplus_Input.inp", data = NULL, comment = FALSE, replace.inp = TRUE, mplus.run = TRUE, show.out = FALSE, replace.out = c("always", "never", "modified"), Mplus = detect.mplus(), print = c("all", "input", "result"), input = c("all", "default", "data", "variable", "define", "analysis", "model", "montecarlo", "mod.pop", "mod.cov", "mod.miss", "message"), result = c("all", "default", "summary.analysis.short", "summary.data.short", "random.starts", "summary.fit", "mod.est", "fit", "class.count", "classif", "mod.result", "total.indirect"), exclude = NULL, variable = FALSE, not.input = TRUE, not.result = TRUE, write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
a character string containing the Mplus input text. |
file |
a character string indicating the name of the Mplus input
file with or without the file extension |
data |
a matrix or data frame from which the variables names for
the subsection |
comment |
logical: if |
replace.inp |
logical: if |
mplus.run |
logical: if |
show.out |
logical: if |
replace.out |
a character string for specifying three settings:
|
Mplus |
a character string for specifying the name or path of the Mplus executable to be used for running models. This covers situations where Mplus is not in the system's path, or where one wants to test different versions of the Mplus program. Note that there is no need to specify this argument for most users since it has intelligent defaults. |
print |
a character vector indicating which results to show, i.e.
|
input |
a character vector specifying Mplus input command sections
included in the output (see 'Details' in the |
result |
a character vector specifying Mplus result sections included
in the output (see 'Details' in the |
exclude |
a character vector specifying Mplus input command or result
sections excluded from the output (see 'Details' in the
|
variable |
logical: if |
not.input |
logical: if |
not.result |
logical: if |
write |
a character string naming a file for writing the output into
a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
NAMES Option

The NAMES option in the VARIABLE section, which is used to assign names to the variables in the data set, can be specified by using the data argument:
(1) Write Mplus Data File: In the first step, the Mplus data file is written by using the write.mplus() function, e.g., write.mplus(ex3_1, file = "ex3_1.dat").

(2) Specify Mplus Input: In the second step, the Mplus input is specified as a character string. The NAMES option is left out of the Mplus input text, e.g., input <- 'DATA: FILE IS ex3_1.dat;\nMODEL: y1 ON x1 x3;'.

(3) Run Mplus Input: In the third step, the Mplus input is run by using the mplus() function. The argument data needs to be specified given that the NAMES option was left out of the Mplus input text in the previous step, e.g., mplus(input, file = "ex3_1.inp", data = ex3_1).
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
x |
a character vector containing the Mplus input text |
args |
specification of function arguments |
input |
list with input command sections |
write |
write command sections |
result |
list with result sections (
Takuya Yanagida
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.mplus
, write.mplus
, mplus.update
,
mplus.print
, mplus.plot
, mplus.bayes
,
mplus.run
, mplus.lca
## Not run: 
#----------------------------------------------------------------------------
# Example 1: Write data, specify input, and run input

# Write Mplus Data File
write.mplus(ex3_1, file = "ex3_1.dat")

# Specify Mplus input, specify NAMES option
input1 <- '
DATA:     FILE IS ex3_1.dat;
VARIABLE: NAMES ARE y1 x1 x3;
MODEL:    y1 ON x1 x3;
OUTPUT:   SAMPSTAT;
'

# Run Mplus input
mplus(input1, file = "ex3_1.inp")

#----------------------------------------------------------------------------
# Example 2: Alternative specification using the data argument

# Specify Mplus input, leave out the NAMES option
input2 <- '
DATA:   FILE IS ex3_1.dat;
MODEL:  y1 ON x1 x3;
OUTPUT: SAMPSTAT;
'

# Run Mplus input, specify the data argument
mplus(input2, file = "ex3_1.inp", data = ex3_1)
## End(Not run)
This function uses the h5file
function in the hdf5r package to
read a Mplus GH5 file that is requested by the command PLOT: TYPE IS PLOT2
in Mplus to compute point estimates (i.e., mean, median, and MAP), measures of dispersion
(i.e., standard deviation and mean absolute deviation), measures of shape (i.e.,
skewness and kurtosis), credible intervals (i.e., equal-tailed intervals and
highest density interval), convergence and efficiency diagnostics (i.e., potential
scale reduction factor R-hat, effective sample size, and Monte Carlo standard error),
probability of direction, and probability of being in the region of practical
equivalence for the posterior distribution of each parameter. By default, the function computes the maximum of the rank-normalized split-R-hat and the rank-normalized folded-split-R-hat, the bulk effective sample size (Bulk-ESS) for rank-normalized values using split chains, the tail effective sample size (Tail-ESS) defined as the minimum of the effective sample sizes for the 0.025 and 0.975 quantiles, the bulk Monte Carlo standard error (Bulk-MCSE) for the median, and the tail Monte Carlo standard error (Tail-MCSE) defined as the maximum of the MCSEs for the 0.025 and 0.975 quantiles.
mplus.bayes(x, print = c("all", "default", "m", "med", "map", "sd", "mad", "skew", "kurt", "eti", "hdi", "rhat", "b.ess", "t.ess", "b.mcse", "t.mcse"), param = c("all", "on", "by", "with", "inter", "var", "r2", "new"), std = c("all", "none", "stdyx", "stdy", "std"), m.bulk = FALSE, split = TRUE, rank = TRUE, fold = TRUE, pd = FALSE, null = 0, rope = NULL, ess.tail = c(0.025, 0.975), mcse.tail = c(0.025, 0.975), alternative = c("two.sided", "less", "greater"), conf.level = 0.95, digits = 2, r.digits = 3, ess.digits = 0, mcse.digits = 3, p.digits = 3, write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
a character string indicating the name of the Mplus GH5 file
(HDF5 format) with or without the file extension |
print |
a character vector indicating which summary measures,
convergence, and efficiency diagnostics to be printed on
the console, i.e. |
param |
character vector indicating which parameters to print
for the summary measures, convergence, and efficiency
diagnostics, i.e., |
std |
a character vector indicating the standardized
parameters to print for the summary measures, convergence,
and efficiency diagnostics, i.e., |
m.bulk |
logical: if |
split |
logical: if |
rank |
logical: if |
fold |
logical: if |
pd |
logical: if |
null |
a numeric value considered as a null effect for the probability
of direction (default is |
rope |
a numeric vector with two elements indicating the ROPE's
lower and upper bounds. ROPE is also depending on the argument
|
ess.tail |
a numeric vector with two elements to specify the quantiles
for computing the tail ESS. The default setting is
|
mcse.tail |
a numeric vector with two elements to specify the quantiles
for computing the tail MCSE. The default setting is
|
alternative |
a character string specifying the alternative hypothesis
for the credible intervals, must be one of |
conf.level |
a numeric value between 0 and 1 indicating the confidence
level of the credible interval. The default setting is |
digits |
an integer value indicating the number of decimal places to be used for displaying point estimates, measures of dispersion, and credible intervals. |
r.digits |
an integer value indicating the number of decimal places to be used for displaying R-hat values. |
ess.digits |
an integer value indicating the number of decimal places to be used for displaying effective sample sizes. |
mcse.digits |
an integer value indicating the number of decimal places to be used for displaying Monte Carlo standard errors. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the probability of direction and the probability of being in the region of practical equivalence (ROPE). |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Convergence and efficiency diagnostics for Markov chains are based on the following numeric measures:
Potential Scale Reduction (PSR) factor R-hat: The PSR factor
R-hat compares the between- and within-chain variance for a model
parameter, i.e., R-hat larger than 1 indicates that the between-chain
variance is greater than the within-chain variance and chains have not
mixed well. According to the default setting, the function computes the
improved R-hat as recommended by Vehtari et al. (2020) based on rank-normalizing
(i.e., rank = TRUE
) and folding (i.e., fold = TRUE
) the
posterior draws after splitting each MCMC chain in half (i.e.,
split = TRUE
). The traditional R-hat used in Mplus can be requested
by specifying split = FALSE
, rank = FALSE
, and
fold = FALSE
. Note that the traditional R-hat can catch many
problems of poor convergence, but fails if the chains have different
variances with the same mean parameter or if the chains have infinite
variance with one of the chains having a different location parameter to
the others (Vehtari et al., 2020). According to Gelman et al. (2014), an R-hat value of 1.1 or smaller for all parameters can be considered evidence for convergence. The Stan Development Team (2024) recommends running at least four chains and a convergence criterion of less than 1.05 for the maximum of the rank-normalized split-R-hat and the rank-normalized folded-split-R-hat. Vehtari et al. (2020), however, recommended using the posterior samples only if R-hat is less than 1.01, because R-hat can fall below 1.1 well before convergence in some scenarios (Brooks & Gelman, 1998; Vats & Knudson, 2018).
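For illustration, the traditional (non-split, non-rank-normalized) PSR factor can be sketched in base R (a minimal sketch assuming a matrix theta of posterior draws with one row per iteration and one column per chain; not the rank-normalized computation used by default):

# Traditional R-hat: compare between- and within-chain variance
rhat <- function(theta) {
  n <- nrow(theta)                     # iterations per chain
  B <- n * var(colMeans(theta))        # between-chain variance
  W <- mean(apply(theta, 2, var))      # within-chain variance
  sqrt(((n - 1) / n * W + B / n) / W)  # potential scale reduction factor
}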
Effective Sample Size (ESS): The ESS is the estimated number
of independent samples from the posterior distribution that would lead
to the same precision as the autocorrelated samples at hand. According
to the default setting, the function computes the ESS based on rank-normalized
split-R-hat and within-chain autocorrelation. The function provides the
estimated Bulk-ESS (B.ESS
) and the Tail-ESS (T.ESS
). The
Bulk-ESS is a useful measure for sampling efficiency in the bulk of the distribution (i.e., efficiency of the posterior mean), and the Tail-ESS is a useful measure for sampling efficiency in the tails of the distribution (e.g., efficiency of tail quantile estimates). Note that by default, the Tail-ESS is the minimum of the effective sample sizes for the 2.5% and 97.5% quantiles (ess.tail = c(0.025, 0.975)). According to Kruschke (2015), a rank-normalized ESS greater than 400 is usually sufficient to get a stable estimate of the Monte Carlo standard error. However, an ESS of at least 1000 is considered optimal (Zitzmann & Hecht, 2019).
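The idea behind the ESS can be sketched for a single chain (a minimal sketch using a simplified truncation rule for the autocorrelation sum; ess() is a hypothetical helper, not the rank-normalized split-chain computation used by default):

# ESS: chain length deflated by the amount of autocorrelation
ess <- function(x, max.lag = 100) {
  ac <- acf(x, lag.max = max.lag, plot = FALSE)$acf[-1]  # autocorrelations at lag 1, 2, ...
  neg <- which(ac <= 0)                                  # truncate at the first non-positive lag
  if (length(neg) > 0) ac <- ac[seq_len(neg[1] - 1)]
  length(x) / (1 + 2 * sum(ac))
}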
Monte Carlo Standard Error (MCSE): The MCSE is defined as
the standard deviation of the chains divided by their effective sample
size and reflects uncertainty due to the stochastic algorithm of the
Markov Chain Monte Carlo method. The function provides the estimated
Bulk-MCSE (B.MCSE
) for the margin of error when using the MCMC
samples to estimate the posterior mean and the Tail-MCSE (T.MCSE
)
for the margin of error when using the MCMC samples for interval
estimation.
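Given an ESS, the MCSE for the posterior mean follows directly from its definition (a minimal sketch building on the hypothetical ess() sketch above):

# MCSE: standard deviation of the draws divided by the square root of the ESS
mcse <- function(x) sd(x) / sqrt(ess(x))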
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
x |
Mplus GH5 file |
args |
specification of function arguments |
data |
three-dimensional array (parameter x iteration x chain) of the posterior |
result |
result table with summary measures, convergence, and efficiency diagnostics |
This function is a modified copy of functions provided in the rstan package by Stan Development Team (2024) and bayestestR package by Makowski et al. (2019).
Takuya Yanagida
Brooks, S. P. and Gelman, A. (1998). General Methods for Monitoring Convergence of Iterative Simulations. Journal of Computational and Graphical Statistics, 7(4): 434–455. MR1665662.
Gelman, A., & Rubin, D.B. (1992). Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457-472. https://doi.org/10.1214/ss/1177011136
Kruschke, J. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press.
Makowski, D., Ben-Shachar, M., & Lüdecke, D. (2019). bayestestR: Describing effects and their uncertainty, existence and significance within the Bayesian framework. Journal of Open Source Software, 4(40), 1541. https://doi.org/10.21105/joss.01541
Stan Development Team (2024). RStan: the R interface to Stan. R package version 2.32.6. https://mc-stan.org/.
Vats, D. and Knudson, C. (2018). Revisiting the Gelman-Rubin Diagnostic. arXiv:1812.09384.
Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., & Bürkner, P.-C. (2020). Rank-normalization, folding, and localization: An improved R-hat for assessing convergence of MCMC. Bayesian Analysis, 16(2), 667-718. https://doi.org/10.1214/20-BA1221
Zitzmann, S., & Hecht, M. (2019). Going beyond convergence in Bayesian estimation: Why precision matters too and how to assess it. Structural Equation Modeling: A Multidisciplinary Journal, 26(4), 646–661. https://doi.org/10.1080/10705511.2018.1545232
read.mplus
, write.mplus
, mplus
,
mplus.update
, mplus.print
, mplus.plot
,
mplus.run
, mplus.lca
## Not run: 
#----------------------------------------------------------------------------
# Mplus Example 3.18: Moderated Mediation with a Plot of the Indirect Effect

# Example 1: Default setting
mplus.bayes("ex3.18.gh5")

# Example 2: Print all parameters
mplus.bayes("ex3.18.gh5", param = "all")

# Example 3: Print parameters not in the analysis model
mplus.bayes("ex3.18.gh5", param = "new")

# Example 4a: Print all summary measures, convergence, and efficiency diagnostics
mplus.bayes("ex3.18.gh5", print = "all")

# Example 4b: Print default measures plus MAP
mplus.bayes("ex3.18.gh5", print = c("default", "map"))

# Example 5: Print traditional R-hat in line with Mplus
mplus.bayes("ex3.18.gh5", split = FALSE, rank = FALSE, fold = FALSE)

# Example 6: Print probability of direction and the probability of
# being in the ROPE [-0.1, 0.1]
mplus.bayes("ex3.18.gh5", pd = TRUE, rope = c(-0.1, 0.1))

# Example 7: Write results into a text file
mplus.bayes("ex3.18.gh5", write = "Bayes_Summary.txt")

# Example 8: Write results into an Excel file
mplus.bayes("ex3.18.gh5", write = "Bayes_Summary.xlsx")
## End(Not run)
This function writes Mplus input files for conducting latent class analysis (LCA)
for continuous, count, ordered categorical, and unordered categorical variables.
LCA with continuous indicator variables is based on six different
variance-covariance structures, while LCA for all other variable types assume
local independence. By default, the function conducts LCA with continuous
variables and creates folders in the current working directory for each of the
six sets of analysis, writes Mplus input files for conducting LCA with
k = 1 to k = 6 classes into these folders, and writes the matrix
or data frame specified in x
into a Mplus data file in the current working
directory. Optionally, all models can be estimated by setting the argument
mplus.run
to TRUE
.
mplus.lca(x, ind = NULL, type = c("continuous", "count", "categorical", "nominal"), cluster = NULL, folder = c("A_Invariant-Theta_Diagonal-Sigma", "B_Varying-Theta_Diagonal-Sigma", "C_Invariant-Theta_Invariant-Unrestrictred-Sigma", "D_Invariant-Theta_Varying-Unrestricted-Sigma", "E_Varying-Theta_Invariant-Unrestricted-Sigma", "F_Varying-Theta_Varying-Unrestricted-Sigma"), file = "Data_LCA.dat", write = c("all", "folder", "data", "input"), useobservations = NULL, missing = -99, classes = 6, estimator = "MLR", starts = c(100, 50), stiterations = 10, lrtbootstrap = 1000, lrtstarts = c(0, 0, 100, 50), processors = c(8, 8), output = c("all", "SVALUES", "CINTERVAL", "TECH7", "TECH8", "TECH11", "TECH14"), replace.inp = FALSE, mplus.run = FALSE, Mplus = "Mplus", replace.out = c("always", "never", "modified"), check = TRUE)
x |
a matrix or data frame. Note that all variable names must be no longer than 8 characters. |
ind |
a character vector indicating the variables names of the
latent class indicators in |
type |
a character string indicating the variable type of the
latent class indicators, i.e., |
cluster |
a character string indicating the cluster variable in
the matrix or data frame specified in |
folder |
a character vector with six character strings for specifying
the names of the six folder representing different
variance-covariance structures for conducting LCA with
continuous indicator variables. There is only one folder
for LCA with all other variable types which is called
|
file |
a character string naming the Mplus data file with or
without the file extension '.dat', e.g., |
write |
a character string or character vector indicating whether
to create the six folders specified in the argument
|
useobservations |
a character string indicating the conditional statement to select observations. |
missing |
a numeric value or character string representing missing
values ( |
classes |
an integer value specifying the maximum number of classes for the latent class analysis. By default, LCA with a maximum of 6 classes is specified (i.e., k = 1 to k = 6). |
estimator |
a character string for specifying the |
starts |
a vector with two integer values for specifying the
|
stiterations |
an integer value specifying the |
lrtbootstrap |
an integer value for specifying the |
lrtstarts |
a vector with four integer values for specifying the
|
processors |
a vector of one or two integer values for specifying the
|
output |
a character string or character vector specifying the
|
replace.inp |
logical: if |
mplus.run |
logical: if |
Mplus |
a character string for specifying the name or path of the Mplus executable to be used for running models. This covers situations where Mplus is not in the system's path, or where one wants to test different versions of the Mplus program. Note that there is no need to specify this argument for most users since it has intelligent defaults. |
replace.out |
a character string for specifying three settings, i.e.,
|
check |
logical: if |
Latent class analysis (LCA) is a model-based clustering and classification method used to identify qualitatively different classes of observations which are unknown and must be inferred from the data. LCA can accommodate continuous, count, binary, ordered categorical, and unordered categorical indicators. LCA with continuous indicator variables is also known as latent profile analysis (LPA). In LPA, the within-profile variance-covariance structures represent different assumptions regarding the variance and covariance of the indicator variables both within and between latent profiles. As the best within-profile variance-covariance structure is not known a priori, all of the different structures must be investigated to identify the best model (Masyn, 2013). This function specifies six different variance-covariance structures labeled A to F (see Table 1 in Patterer et al., 2023):
Model A: The within-profile variance is constrained to be profile-invariant and covariances are constrained to be 0 in all profiles (i.e., equal variances across profiles and no covariances among indicator variables). This is the default setting in Mplus.

Model B: The within-profile variance is profile-varying and covariances are constrained to be 0 in all profiles (i.e., unequal variances across profiles and no covariances among indicator variables).

Model C: The within-profile variance is constrained to be profile-invariant and covariances are constrained to be equal in all profiles (i.e., equal variances and covariances across profiles).

Model D: The within-profile variance is constrained to be profile-invariant and covariances are profile-varying (i.e., equal variances across profiles and unequal covariances across profiles).

Model E: The within-profile variances are profile-varying and covariances are constrained to be equal in all profiles (i.e., unequal variances across profiles and equal covariances across profiles).

Model F: The within-profile variances and covariances are both profile-varying (i.e., unequal variances and covariances across profiles).
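To illustrate how these structures map onto Mplus syntax, freeing the within-profile variances across profiles (structure B) is done in the class-specific parts of the MODEL command (a minimal sketch assuming four indicators x1-x4 and two profiles; the function writes the complete input files automatically):

MODEL:
  %OVERALL%
  %c#1%
  x1-x4;   ! free the variances of x1-x4 in profile 1
  %c#2%
  x1-x4;   ! free the variances of x1-x4 in profile 2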
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
x |
matrix or data frame specified in the argument x |
args |
specification of function arguments |
result |
a list with six entries, one for each of the variance-covariance structures, containing the Mplus inputs based on different numbers of profiles in the case of continuous indicators, or a list of Mplus inputs based on different numbers of classes in the case of count, ordered categorical, or unordered categorical indicators. |
Takuya Yanagida [email protected]
Masyn, K. E. (2013). Latent class analysis and finite mixture modeling. In T. D. Little (Ed.), The Oxford handbook of quantitative methods: Statistical analysis (pp. 551–611). Oxford University Press.
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
Patterer, A. S., Yanagida, T., Kühnel, J., & Korunka, C. (2023). Daily receiving and providing of social support at work: Identifying support exchange patterns in hierarchical data. Journal of Work and Organizational Psychology, 32(4), 489-505. https://doi.org/10.1080/1359432X.2023.2177537
read.mplus
, write.mplus
, mplus
,
mplus.update
, mplus.print
, mplus.plot
,
mplus.bayes
, mplus.run
## Not run: 
# Load data set "HolzingerSwineford1939" in the lavaan package
data("HolzingerSwineford1939", package = "lavaan")

#-------------------------------------------------------------------------------
# Example 1: LCA with k = 1 to k = 8 profiles, continuous indicators
# Input statements that contain parameter estimates
# Vuong-Lo-Mendell-Rubin LRT and bootstrapped LRT
mplus.lca(HolzingerSwineford1939, ind = c("x1", "x2", "x3", "x4"),
          classes = 8, output = c("SVALUES", "TECH11", "TECH14"))

#-------------------------------------------------------------------------------
# Example 2: LCA with k = 1 to k = 6 profiles, ordered categorical indicators
# Select observations with ageyr <= 13
# Estimate all models in Mplus
mplus.lca(round(HolzingerSwineford1939[, -5]), ind = c("x1", "x2", "x3", "x4"),
          type = "categorical", useobservations = "ageyr <= 13",
          mplus.run = TRUE)
## End(Not run)
This function uses the h5file
function in the hdf5r package to
read a Mplus GH5 file that is requested by the command PLOT: TYPE IS PLOT2
in Mplus to display trace plots, posterior distribution plots, autocorrelation
plots, posterior predictive check plots based on the "bayesian_data" section, and
the loop plot based on the "loop_data" section of the Mplus GH5 file. By default,
the function displays trace plots if the "bayesian_data" section is available in
the Mplus GH5 File. Otherwise, the function plots the loop plot if the "loop_data"
section is available in the Mplus GH5 file.
mplus.plot(x, plot = c("none", "trace", "post", "auto", "ppc", "loop"), param = c("all", "on", "by", "with", "inter", "var", "r2", "new"), std = c("all", "none", "stdyx", "stdy", "std"), burnin = TRUE, point = c("all", "none", "m", "med", "map"), ci = c("none", "eti", "hdi"), chain = 1, conf.level = 0.95, hist = TRUE, density = TRUE, area = TRUE, alpha = 0.4, fill = "gray85", nrow = NULL, ncol = NULL, scales = c("fixed", "free", "free_x", "free_y"), xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL, xbreaks = ggplot2::waiver(), ybreaks = ggplot2::waiver(), xexpand = ggplot2::waiver(), yexpand = ggplot2::waiver(), palette = "Set 2", binwidth = NULL, bins = NULL, density.col = "#0072B2", shape = 21, point.col = c("#CC79A7", "#D55E00", "#009E73"), linewidth = 0.6, linetype = "dashed", line.col = "black", bar.col = "black", bar.width = 0.8, plot.margin = NULL, legend.title.size = 10, legend.text.size = 10, legend.box.margin = NULL, saveplot = c("all", "none", "trace", "post", "auto", "ppc", "loop"), file = "Mplus_Plot.pdf", file.plot = c("_TRACE", "_POST", "_AUTO", "_PPC", "_LOOP"), width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 600, check = TRUE)
mplus.plot(x, plot = c("none", "trace", "post", "auto", "ppc", "loop"), param = c("all", "on", "by", "with", "inter", "var", "r2", "new"), std = c("all", "none", "stdyx", "stdy", "std"), burnin = TRUE, point = c("all", "none", "m", "med", "map"), ci = c("none", "eti", "hdi"), chain = 1, conf.level = 0.95, hist = TRUE, density = TRUE, area = TRUE, alpha = 0.4, fill = "gray85", nrow = NULL, ncol = NULL, scales = c("fixed", "free", "free_x", "free_y"), xlab = NULL, ylab = NULL, xlim = NULL, ylim = NULL, xbreaks = ggplot2::waiver(), ybreaks = ggplot2::waiver(), xexpand = ggplot2::waiver(), yexpand = ggplot2::waiver(), palette = "Set 2", binwidth = NULL, bins = NULL, density.col = "#0072B2", shape = 21, point.col = c("#CC79A7", "#D55E00", "#009E73"), linewidth = 0.6, linetype = "dashed", line.col = "black", bar.col = "black", bar.width = 0.8, plot.margin = NULL, legend.title.size = 10, legend.text.size = 10, legend.box.margin = NULL, saveplot = c("all", "none", "trace", "post", "auto", "ppc", "loop"), file = "Mplus_Plot.pdf", file.plot = c("_TRACE", "_POST", "_AUTO", "_PPC", "_LOOP"), width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 600, check = TRUE)
x |
a character string indicating the name of the Mplus
GH5 file (HDF5 format) with or without the file
extension |
plot |
a character string indicating the type of plot to
display, i.e., |
param |
character vector indicating which parameters to print
for the trace plots, posterior distribution plots,
and autocorrelation plots, i.e., |
std |
a character vector indicating the standardized
parameters to print for the trace plots, posterior
distribution plots, and autocorrelation plots, i.e.,
|
burnin |
logical: if |
point |
a character vector indicating the point estimate(s)
to be displayed in the posterior distribution plots,
i.e., |
ci |
a character string indicating the type of credible
interval to be displayed in the posterior distribution
plots, i.e., |
chain |
a numerical value indicating the chain to be used for the autocorrelation plots. By default, the first chain is used. |
conf.level |
a numeric value between 0 and 1 indicating the
confidence level of the credible interval (default is
|
hist |
logical: if |
density |
logical: if |
area |
logical: if |
alpha |
a numeric value between 0 and 1 for the |
fill |
a character string indicating the color for the
|
nrow |
a numeric value indicating the |
ncol |
a numeric value indicating the |
scales |
a character string indicating the |
xlab |
a character string indicating the |
ylab |
a character string indicating the |
xlim |
a numeric vector with two elements indicating the
|
ylim |
a numeric vector with two elements indicating the
|
xbreaks |
a numeric vector indicating the |
ybreaks |
a numeric vector indicating the |
xexpand |
a numeric vector with two elements indicating the
|
yexpand |
a numeric vector with two elements indicating the
|
palette |
a character string indicating the palette name (default
is |
binwidth |
a numeric value indicating the |
bins |
a numeric value indicating the |
density.col |
a character string indicating the |
shape |
a numeric value indicating the |
point.col |
a character vector with three elements indicating the
|
linewidth |
a numeric value indicating the |
linetype |
a numeric value indicating the |
line.col |
a character string indicating the |
bar.col |
a character string indicating the |
bar.width |
a character string indicating the |
plot.margin |
a numeric vector indicating the |
legend.title.size |
a numeric value indicating the |
legend.text.size |
a numeric value indicating the |
legend.box.margin |
a numeric vector indicating the |
saveplot |
a character vector indicating the plot to be saved,
i.e., |
file |
a character string indicating the |
file.plot |
a character vector with five elements for distinguishing
different types of plots. By default, the character
string specified in the argument |
width |
a numeric value indicating the |
height |
a numeric value indicating the |
units |
a character string indicating the |
dpi |
a numeric value indicating the |
check |
logical: if |
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
x |
Mplus GH5 file |
args |
specification of function arguments |
data |
list with posterior distribution of each parameter estimate
in wide and long format ( |
plot |
list with the trace plots ( |
Takuya Yanagida
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.mplus
, write.mplus
, mplus
,
mplus.update
, mplus.print
, mplus.bayes
,
mplus.run
, mplus.lca
## Not run: 
#----------------------------------------------------------------------------
# Mplus Example 3.18: Moderated Mediation with a Plot of the Indirect Effect

#..........
# Trace Plots

# Example 1a: Default setting
mplus.plot("ex3.18.gh5")

# Example 1b: Exclude first half of each chain
mplus.plot("ex3.18.gh5", burnin = FALSE)

# Example 1c: Print all parameters
mplus.plot("ex3.18.gh5", param = "all")

# Example 1d: Print user-specified parameters
mplus.plot("ex3.18.gh5", param = "param")

# Example 1e: Arrange panels in three columns
mplus.plot("ex3.18.gh5", ncol = 3)

# Example 1f: Specify "Pastel 1" palette for the hcl.colors function
mplus.plot("ex3.18.gh5", palette = "Pastel 1")

#..........
# Posterior Distribution Plots

# Example 2a: Default setting, i.e., posterior median and equal-tailed interval
mplus.plot("ex3.18.gh5", plot = "post")

# Example 2b: Display posterior mean and maximum a posteriori
mplus.plot("ex3.18.gh5", plot = "post", point = c("m", "map"))

# Example 2c: Display maximum a posteriori and highest density interval
mplus.plot("ex3.18.gh5", plot = "post", point = "map", ci = "hdi")

# Example 2d: Do not display any point estimates and credible interval
mplus.plot("ex3.18.gh5", plot = "post", point = "none", ci = "none")

# Example 2e: Do not display histograms
mplus.plot("ex3.18.gh5", plot = "post", hist = FALSE)

#..........
# Autocorrelation Plots

# Example 3a: Default setting, i.e., first chain
mplus.plot("ex3.18.gh5", plot = "auto")

# Example 3b: Use second chain
mplus.plot("ex3.18.gh5", plot = "auto", chain = 2)

# Example 3c: Modify limits and breaks of the y-axis
mplus.plot("ex3.18.gh5", plot = "auto", ylim = c(-0.05, 0.05),
           ybreaks = seq(-0.1, 0.1, by = 0.025))

#..........
# Posterior Predictive Check Plots

# Example 4a: Default setting, i.e., 95% interval
mplus.plot("ex3.18.gh5", plot = "ppc")

# Example 4b: 99% interval
mplus.plot("ex3.18.gh5", plot = "ppc", conf.level = 0.99)

#..........
# Loop Plot

# Example 5a: Default setting
mplus.plot("ex3.18.gh5", plot = "loop")

# Example 5b: Do not fill area and draw vertical lines
mplus.plot("ex3.18.gh5", plot = "loop", area = FALSE)

#..........
# Save Plots

# Example 6a: Save all plots in pdf format
mplus.plot("ex3.18.gh5", saveplot = "all")

# Example 6b: Save all plots in png format with 300 dpi
mplus.plot("ex3.18.gh5", saveplot = "all", file = "Mplus_Plot.png", dpi = 300)

# Example 6c: Save loop plot, specify width and height of the plot
mplus.plot("ex3.18.gh5", plot = "none", saveplot = "loop",
           width = 7.5, height = 7)

#----------------------------------------------------------------------------
# Plot from misty.object

# Create misty.object
object <- mplus.plot("ex3.18.gh5", plot = "none")

# Trace plot
mplus.plot(object, plot = "trace")

# Posterior distribution plot
mplus.plot(object, plot = "post")

# Autocorrelation plot
mplus.plot(object, plot = "auto")

# Posterior predictive check plot
mplus.plot(object, plot = "ppc")

# Loop plot
mplus.plot(object, plot = "loop")

#----------------------------------------------------------------------------
# Create Plots Manually

# Load ggplot2 package
library(ggplot2)

# Create misty object
object <- mplus.plot("ex3.18.gh5", plot = "none")

#..........
# Example 7: Trace Plots

# Extract data in long format
data.post <- object$data$post$long

# Extract ON parameters
data.trace <- data.post[grep(" ON ", data.post$param), ]

# Plot
ggplot(data.trace, aes(x = iter, y = value, color = chain)) +
  annotate("rect", xmin = 0, xmax = 15000, ymin = -Inf, ymax = Inf,
           alpha = 0.4, fill = "gray85") +
  geom_line() +
  facet_wrap(~ param, ncol = 2, scales = "free") +
  scale_x_continuous(name = "", expand = c(0.02, 0)) +
  scale_y_continuous(name = "", expand = c(0.02, 0)) +
  scale_colour_manual(name = "Chain",
                      values = hcl.colors(n = 2, palette = "Set 2")) +
  theme_bw() +
  guides(color = guide_legend(nrow = 1, byrow = TRUE)) +
  theme(plot.margin = margin(c(4, 15, -10, 0)),
        legend.position = "bottom",
        legend.title = element_text(size = 10),
        legend.text = element_text(size = 10),
        legend.box.margin = margin(c(-16, 6, 6, 6)),
        legend.background = element_rect(fill = "transparent"))

#..........
# Example 8: Posterior Distribution Plots

# Extract data in long format
data.post <- object$data$post$long

# Extract ON parameters
data.post <- data.post[grep(" ON ", data.post$param), ]

# Discard burn-in iterations
data.post <- data.post[data.post$iter > 15000, ]

# Drop factor levels
data.post$param <- droplevels(data.post$param,
                              exclude = c("[Y]", "[M]", "Y", "M", "INDIRECT", "MOD"))

# Plot
ggplot(data.post, aes(x = value)) +
  geom_histogram(aes(y = after_stat(density)), color = "black",
                 alpha = 0.4, fill = "gray85") +
  geom_density(color = "#0072B2") +
  geom_vline(data = data.frame(param = unique(data.post$param),
                               stat = tapply(data.post$value, data.post$param, median)),
             aes(xintercept = stat, color = "Median"), linewidth = 0.6) +
  geom_vline(data = data.frame(param = unique(data.post$param),
                               low = tapply(data.post$value, data.post$param,
                                            function(y) quantile(y, probs = 0.025))),
             aes(xintercept = low), linetype = "dashed", linewidth = 0.6) +
  geom_vline(data = data.frame(param = unique(data.post$param),
                               upp = tapply(data.post$value, data.post$param,
                                            function(y) quantile(y, probs = 0.975))),
             aes(xintercept = upp), linetype = "dashed", linewidth = 0.6) +
  facet_wrap(~ param, ncol = 2, scales = "free") +
  scale_x_continuous(name = "", expand = c(0.02, 0)) +
  scale_y_continuous(name = "Probability Density, f(x)",
                     expand = expansion(mult = c(0L, 0.05))) +
  scale_color_manual(name = "Point Estimate", values = c(Median = "#D55E00")) +
  labs(caption = "95% Equal-Tailed Interval") +
  theme_bw() +
  theme(plot.margin = margin(c(4, 15, -8, 4)),
        plot.caption = element_text(hjust = 0.5, vjust = 7),
        legend.position = "bottom",
        legend.title = element_text(size = 10),
        legend.text = element_text(size = 10),
        legend.box.margin = margin(c(-30, 6, 6, 6)),
        legend.background = element_rect(fill = "transparent"))

#..........
# Example 9: Autocorrelation Plots

# Extract data in long format
data.auto <- object$data$auto$long

# Select first chain
data.auto <- data.auto[data.auto$chain == 1, ]

# Extract ON parameters
data.auto <- data.auto[grep(" ON ", data.auto$param), ]

# Plot
ggplot(data.auto, aes(x = lag, y = cor)) +
  geom_bar(stat = "identity", alpha = 0.4, color = "black",
           fill = "gray85", width = 0.8) +
  facet_wrap(~ param, ncol = 2) +
  scale_x_continuous(name = "Lag", breaks = seq(1, 30, by = 2),
                     expand = c(0.02, 0)) +
  scale_y_continuous(name = "Autocorrelation", limits = c(-0.1, 0.1),
                     breaks = seq(-0.1, 1., by = 0.05), expand = c(0.02, 0)) +
  theme_bw() +
  theme(plot.margin = margin(c(4, 15, 4, 4)))

#..........
# Example 10: Posterior Predictive Check (PPC) Plots

# Extract data
data.ppc <- object$data$ppc

# Scatter plot
ppc.scatter <- ggplot(data.ppc, aes(x = obs, y = rep)) +
  geom_point(shape = 21, fill = "gray85") +
  geom_abline(slope = 1) +
  scale_x_continuous("Observed", limits = c(0, 45), breaks = seq(0, 45, by = 5),
                     expand = c(0.02, 0)) +
  scale_y_continuous("Replicated", limits = c(0, 45), breaks = seq(0, 45, by = 5),
                     expand = c(0.02, 0)) +
  theme_bw() +
  theme(plot.margin = margin(c(2, 15, 4, 4)))

# Histogram
ppc.hist <- ggplot(data.ppc, aes(x = diff)) +
  geom_histogram(color = "black", alpha = 0.4, fill = "gray85") +
  geom_vline(xintercept = mean(data.ppc$diff), color = "#CC79A7") +
  geom_vline(xintercept = quantile(data.ppc$diff, probs = 0.025),
             linetype = "dashed", color = "#CC79A7") +
  geom_vline(xintercept = quantile(data.ppc$diff, probs = 0.975),
             linetype = "dashed", color = "#CC79A7") +
  scale_x_continuous("Observed - Replicated", expand = c(0.02, 0)) +
  scale_y_continuous("Count", expand = expansion(mult = c(0L, 0.05))) +
  theme_bw() +
  theme(plot.margin = margin(c(2, 15, 4, 4)))

# Combine plots using the patchwork package
patchwork::wrap_plots(ppc.scatter, ppc.hist)

#..........
# Example 11: Loop Plot

# Extract data
data.loop <- object$data$loop

# Plot
plot.loop <- ggplot(data.loop, aes(x = xval, y = estimate)) +
  geom_line(linewidth = 0.6, show.legend = FALSE) +
  geom_line(aes(xval, low)) +
  geom_line(aes(xval, upp)) +
  scale_x_continuous("MOD", expand = c(0.02, 0)) +
  scale_y_continuous("INDIRECT", expand = c(0.02, 0)) +
  scale_fill_manual("Statistical Significance",
                    values = hcl.colors(n = 2, palette = "Set 2")) +
  theme_bw() +
  theme(plot.margin = margin(c(4, 15, -6, 4)),
        legend.position = "bottom",
        legend.title = element_text(size = 10),
        legend.text = element_text(size = 10),
        legend.box.margin = margin(-10, 6, 6, 6),
        legend.background = element_rect(fill = "transparent"))

# Significance area
for (i in unique(data.loop$group)) {
  plot.loop <- plot.loop + geom_ribbon(data = data.loop[data.loop$group == i, ],
                                       aes(ymin = low, ymax = upp, fill = sig),
                                       alpha = 0.4)
}

# Vertical lines
plot.loop + geom_vline(data = data.loop[data.loop$change == 1, ],
                       aes(xintercept = xval, color = sig),
                       linewidth = 0.6, linetype = "dashed", show.legend = FALSE)
## End(Not run)
This function prints the input command sections and the result sections of an Mplus output file (.out) on the R console. By default, the function prints selected result sections, e.g., the short Summary of Analysis, the short Summary of Data, the Model Fit Information, and the Model Results sections.
mplus.print(x, print = c("all", "input", "result"), input = c("all", "default", "data", "variable", "define", "analysis", "model", "montecarlo", "mod.pop", "mod.cov", "mod.miss", "message"), result = c("all", "default", "summary.analysis.short", "summary.data.short", "random.starts", "summary.fit", "mod.est", "fit", "class.count", "classif", "mod.result", "total.indirect"), exclude = NULL, variable = FALSE, not.input = TRUE, not.result = TRUE, write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
a character string indicating the name of the Mplus output
file with or without the file extension |
print |
a character vector indicating which section to show, i.e.
|
input |
a character vector specifying Mplus input command sections |
result |
a character vector specifying Mplus result sections included in the output (see 'Details'). |
exclude |
a character vector specifying Mplus input command or result sections excluded from the output (see 'Details'). |
variable |
logical: if |
not.input |
logical: if |
not.result |
logical: if |
write |
a character string naming a file for writing the output into
a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The following input command sections can be selected by using the input argument or excluded by using the exclude argument:
"title"
for the TITLE
command used to provide a title
for the analysis.
"data"
for the DATA
command used to provide information
about the data set to be analyzed.
"data.imp"
for the DATA IMPUTATION
command used to
create a set of imputed data sets using multiple imputation methodology.
"data.wl"
for the DATA WIDETOLONG
command used to
rearrange data from a multivariate wide format to a univariate long format.
"data.lw"
for the DATA LONGTOWIDE
command used to
rearrange a univariate long format to a multivariate wide format.
"data.tp"
for the DATA TWOPART
command used to create
a binary and a continuous variable from a continuous variable with a floor
effect for use in two-part modeling.
"data.miss"
for the DATA MISSING
command used to
create a set of binary variables that are indicators of missing data or
dropout for another set of variables.
"data.surv"
for the DATA SURVIVAL
command used to
create variables for discrete-time survival modeling.
"data.coh"
for the DATA COHORT
command used to
rearrange longitudinal data from a format where time points represent
measurement occasions to a format where time points represent age or
another time-related variable.
"variable"
for the VARIABLE
command used to provide
information about the variables in the data set to be analyzed.
"define"
for the DEFINE
command used to transform
existing variables and to create new variables.
"analysis"
for the ANALYSIS
command used to describe
the technical details for the analysis.
"model"
for the MODEL
command used to describe the
model to be estimated.
"mod.ind"
for the MODEL INDIRECT
command used to
request indirect and direct effects and their standard errors.
"mod.test"
for the MODEL TEST
command used to
test restrictions on the parameters in the MODEL
and MODEL CONSTRAINT
commands using the Wald chi-square test.
"mod.prior"
for the MODEL PRIORS
command used with
ESTIMATOR IS BAYES
to specify the prior distribution for each
parameter.
"montecarlo"
for the MONTECARLO
command used to set
up and carry out a Monte Carlo simulation study.
"mod.pop"
for the MODEL POPULATION
command used
to provide the population parameter values to be used in data generation
using the options of the MODEL
command.
"mod.cov"
for the MODEL COVERAGE
command used to provide
the population parameter values to be used for computing coverage.
"mod.miss"
for the MODEL MISSING
command used to
provide information about the population parameter values for the missing
data model to be used in the generation of data.
"output"
for the OUTPUT
command used to
request additional output beyond that included as the default.
"savedata"
for the SAVEDATA
command used to save
the analysis data and/or a variety of model results in an ASCII file for
future use.
"plot"
for the PLOT
command used to request graphical
displays of observed data and analysis results.
"message"
for warning and error messages that have been
generated by the program after the input command sections.
Note that all input command sections are requested by specifying input = "all". The input argument is also used to select one (e.g., input = "model") or more than one input command section (e.g., input = c("analysis", "model")), or to request input command sections in addition to the default setting (e.g., input = c("default", "output")). The exclude argument is used to exclude input command sections from the output (e.g., exclude = "variable").
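For instance, a minimal sketch of these selection patterns, assuming the Mplus output file "ex3.1.out" from the examples below is available in the working directory:

# Print the MODEL input command section only
mplus.print("ex3.1.out", print = "input", input = "model")

# Print the OUTPUT command section in addition to the default input sections
mplus.print("ex3.1.out", input = c("default", "output"))

# Print all input command sections except for the VARIABLE command
mplus.print("ex3.1.out", print = "input", input = "all", exclude = "variable")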
The following result sections can be selected by using the result argument or excluded by using the exclude argument:
"summary.analysis"
for the SUMMARY OF ANALYSIS
section.
"summary.analysis.short"
for a short SUMMARY OF ANALYSIS
section including the number of observations, number of groups, estimator, and optimization algorithm.
"summary.data"
for the SUMMARY OF DATA
section.
"summary.data.short"
for a short SUMMARY OF DATA
section including number of clusters, average cluster size, and estimated intraclass correlations.
"prop.count"
for the UNIVARIATE PROPORTIONS AND COUNTS FOR CATEGORICAL VARIABLES
section.
"summary.censor"
for the SUMMARY OF CENSORED LIMITS
section.
"prop.zero"
for the COUNT PROPORTION OF ZERO, MINIMUM AND MAXIMUM VALUES
section.
"crosstab"
for the CROSSTABS FOR CATEGORICAL VARIABLES
section.
"summary.miss"
for the SUMMARY OF MISSING DATA PATTERNS
section.
"coverage"
for the COVARIANCE COVERAGE OF DATA
section.
"basic"
for the RESULTS FOR BASIC ANALYSIS
section.
"sample.stat"
for the SAMPLE STATISTICS
section.
"uni.sample.stat"
for the UNIVARIATE SAMPLE STATISTICS
section.
"random.starts"
for the RANDOM STARTS RESULTS
section.
"summary.fit"
for the SUMMARY OF MODEL FIT INFORMATION
section.
"mod.est"
for the THE MODEL ESTIMATION TERMINATED NORMALLY
message and warning messages from the model estimation.
"fit"
for the MODEL FIT INFORMATION
section.
"class.count"
for the FINAL CLASS COUNTS AND PROPORTIONS FOR THE LATENT CLASSES
section.
"ind.means"
for the LATENT CLASS INDICATOR MEANS AND PROBABILITIES
section.
"trans.prob"
for the LATENT TRANSITION PROBABILITIES BASED ON THE ESTIMATED MODEL
section.
"classif"
for the CLASSIFICATION QUALITY
section.
"mod.result"
for the MODEL RESULTS
and RESULTS FOR EXPLORATORY FACTOR ANALYSIS
section.
"odds.ratio"
for the LOGISTIC REGRESSION ODDS RATIO RESULTS
section.
"prob.scale"
for the RESULTS IN PROBABILITY SCALE
section.
"ind.odds.ratio"
for the LATENT CLASS INDICATOR ODDS RATIOS FOR THE LATENT CLASSES
section.
"alt.param"
for the ALTERNATIVE PARAMETERIZATIONS FOR THE CATEGORICAL LATENT VARIABLE REGRESSION
section.
"irt.param"
for the IRT PARAMETERIZATION
section.
"brant.wald"
for the BRANT WALD TEST FOR PROPORTIONAL ODDS
section.
"std.mod.result"
for the STANDARDIZED MODEL RESULTS
section.
"rsquare"
for the R-SQUARE
section.
"total.indirect"
for the TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS
section.
"std.total.indirect"
for the STANDARDIZED TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS
section.
"std.mod.result.cluster"
for the WITHIN-LEVEL STANDARDIZED MODEL RESULTS FOR CLUSTER
section.
"fs.comparison"
for the BETWEEN-LEVEL FACTOR SCORE COMPARISONS
section.
"conf.mod.result"
for the CONFIDENCE INTERVALS OF MODEL RESULTS
section.
"conf.std.conf"
for the CONFIDENCE INTERVALS OF STANDARDIZED MODEL RESULTS
section.
"conf.total.indirect"
for the CONFIDENCE INTERVALS OF TOTAL, TOTAL INDIRECT, SPECIFIC INDIRECT, AND DIRECT EFFECTS
section.
"conf.odds.ratio"
for the CONFIDENCE INTERVALS FOR THE LOGISTIC REGRESSION ODDS RATIO RESULTS
section.
"modind"
for the MODEL MODIFICATION INDICES
section.
"resid"
for the RESIDUAL OUTPUT
section.
"logrank"
for the LOGRANK OUTPUT
section.
"tech1"
for the TECHNICAL 1 OUTPUT
section.
"tech2"
for the TECHNICAL 2 OUTPUT
section.
"tech3"
for the TECHNICAL 3 OUTPUT
section.
"h1.tech3"
for the H1 TECHNICAL 3 OUTPUT
section.
"tech4"
for the TECHNICAL 4 OUTPUT
section.
"tech5"
for the TECHNICAL 5 OUTPUT
section.
"tech6"
for the TECHNICAL 6 OUTPUT
section.
"tech7"
for the TECHNICAL 7 OUTPUT
section.
"tech8"
for the TECHNICAL 8 OUTPUT
section.
"tech9"
for the TECHNICAL 9 OUTPUT
section.
"tech10"
for the TECHNICAL 10 OUTPUT
section.
"tech11"
for the TECHNICAL 11 OUTPUT
section.
"tech12"
for the TECHNICAL 12 OUTPUT
section.
"tech13"
for the TECHNICAL 13 OUTPUT
section.
"tech14"
for the TECHNICAL 14 OUTPUT
section.
"tech15"
for the TECHNICAL 15 OUTPUT
section.
"tech16"
for the TECHNICAL 16 OUTPUT
section.
"svalues"
for the MODEL COMMAND WITH FINAL ESTIMATES USED AS STARTING VALUES
section.
"stat.fscores"
for the SAMPLE STATISTICS FOR ESTIMATED FACTOR SCORES
section.
"summary.fscores"
for the SUMMARY OF FACTOR SCORES
section.
"pv"
for the SUMMARIES OF PLAUSIBLE VALUES
section.
"plotinfo"
for the PLOT INFORMATION
section.
"saveinfo"
for the SAVEDATA INFORMATION
section.
Note that all result sections are requested by specifying result = "all". The result argument is also used to select one (e.g., result = "mod.result") or more than one result section (e.g., result = c("mod.result", "std.mod.result")), or to request result sections in addition to the default setting (e.g., result = c("default", "odds.ratio")). The exclude argument is used to exclude result sections from the output (e.g., exclude = "mod.result").
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
character string or misty object |
args |
specification of function arguments |
print |
print objects |
notprint |
character vectors indicating the input commands and result sections not requested |
result |
list with input command sections ( |
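As a brief sketch of working with the returned object, the entries listed above can be accessed by name, assuming the output file "ex3.1.out" from the examples below:

# Create a misty.object without printing the output on the console
out <- mplus.print("ex3.1.out", output = FALSE)

# Inspect the entries of the misty.object
names(out)

# Function call and type of analysis
out$call
out$type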
Takuya Yanagida
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.mplus
, write.mplus
, mplus
,
mplus.update
, mplus.plot
, mplus.bayes
,
mplus.run
, mplus.lca
## Not run: #---------------------------------------------------------------------------- # Mplus Example 3.1: Linear Regression # Example 1a: Default setting mplus.print("ex3.1.out") # Example 1b: Print result section only mplus.print("ex3.1.out", print = "result") # Example 1c: Print MODEL RESULTS only mplus.print("ex3.1.out", print = "result", result = "mod.result") # Example 1d: Print UNIVARIATE SAMPLE STATISTICS in addition to the default setting mplus.print("ex3.1.out", result = c("default", "uni.sample.stat")) # Example 1e: Exclude MODEL FIT INFORMATION section mplus.print("ex3.1.out", exclude = "fit") # Example 1f: Print all result sections, but exclude MODEL FIT INFORMATION section mplus.print("ex3.1.out", result = "all", exclude = "fit") # Example 1g: Print result section in a different order mplus.print("ex3.1.out", result = c("mod.result", "fit", "summary.analysis")) #---------------------------------------------------------------------------- # misty.object of type 'mplus.print' # Example 2 # Create misty.object object <- mplus.print("ex3.1.out", output = FALSE) # Print misty.object mplus.print(object) #---------------------------------------------------------------------------- # Write Results # # Example 3: Write Results into a text file mplus.print("ex3.1.out", write = "Output_3-1.txt") ## End(Not run)
This function runs a group of Mplus models (.inp files) located within a single directory or nested within subdirectories.
mplus.run(target = getwd(), recursive = FALSE, filefilter = NULL, show.out = FALSE, replace.out = c("always", "never", "modified"), message = TRUE, logFile = NULL, Mplus = detect.mplus(), killOnFail = TRUE, local_tmpdir = FALSE)
target |
a character string indicating the directory containing
Mplus input files ( |
recursive |
logical: if |
filefilter |
a Perl regular expression (PCRE-compatible) specifying particular input files to be run within directory. See regex or http://www.pcre.org/pcre.txt for details about regular expression syntax. Not relevant if target is a single file. |
show.out |
logical: if |
replace.out |
a character string for specifying three settings:
|
message |
logical: if |
logFile |
a character string specifying a file that records the settings passed into the function and the models run (or skipped) during the run. |
Mplus |
a character string for specifying the name or path of the Mplus executable to be used for running models. This covers situations where Mplus is not in the system's path, or where one wants to test different versions of the Mplus program. Note that there is no need to specify this argument for most users since it has intelligent defaults. |
killOnFail |
logical: if |
local_tmpdir |
logical: if |
None.
This function is a copy of the runModels() function in the MplusAutomation package by Michael Hallquist and Joshua Wiley (2018).
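As a hedged sketch of the filtering and logging arguments, assuming a subdirectory "Models" containing Mplus input files and a Windows installation of Mplus:

# Run only input files whose names start with "cfa", e.g., "cfa_model1.inp"
mplus.run(target = "Models", filefilter = "^cfa",
          Mplus = "C:/Program Files/Mplus/Mplus.exe")

# Rerun only models whose input file was modified after the existing output
# file was created, and record settings and models run in a log file
mplus.run(target = "Models", replace.out = "modified", logFile = "Mplus_Run.log",
          Mplus = "C:/Program Files/Mplus/Mplus.exe")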
Michael Hallquist and Joshua Wiley
Hallquist, M. N. & Wiley, J. F. (2018). MplusAutomation: An R package for facilitating large-scale latent variable analyses in Mplus. Structural Equation Modeling: A Multidisciplinary Journal, 25, 621-638. https://doi.org/10.1080/10705511.2017.1402334.
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.mplus
, write.mplus
, mplus
,
mplus.update
, mplus.print
, mplus.plot
,
mplus.bayes
, mplus.lca
## Not run: # Example 1: Run Mplus models located within a single directory mplus.run(Mplus = "C:/Program Files/Mplus/Mplus.exe") # Example 2: Run Mplus models located nested within subdirectories mplus.run(recursive = TRUE, Mplus = "C:/Program Files/Mplus/Mplus.exe") ## End(Not run)
This function updates specific input command sections of a misty.object of type mplus to create an updated Mplus input file, run the updated input file by using the mplus.run() function, and print the updated Mplus output file by using the mplus.print() function.
mplus.update(x, update, file = "Mplus_Input_Update.inp", comment = FALSE, replace.inp = TRUE, mplus.run = TRUE, show.out = FALSE, replace.out = c("always", "never", "modified"), print = c("all", "input", "result"), input = c("all", "default", "data", "variable", "define", "analysis", "model", "montecarlo", "mod.pop", "mod.cov", "mod.miss", "message"), result = c("all", "default", "summary.analysis.short", "summary.data.short", "random.starts", "summary.fit", "mod.est", "fit", "class.count", "classif", "mod.result", "total.indirect"), exclude = NULL, variable = FALSE, not.input = TRUE, not.result = TRUE, write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
|
update |
a character string containing the updated input command sections. |
file |
a character string indicating the name of the updated Mplus
input file with or without the file extension |
comment |
logical: if |
replace.inp |
logical: if |
mplus.run |
logical: if |
show.out |
logical: if |
replace.out |
a character string for specifying three settings:
|
print |
a character vector indicating which results to show, i.e.
|
input |
a character vector specifying Mplus input command sections
included in the output (see 'Details' in the |
result |
a character vector specifying Mplus result sections included
in the output (see 'Details' in the |
exclude |
a character vector specifying Mplus input command or result
sections excluded from the output (see 'Details' in the
|
variable |
logical: if |
not.input |
logical: if |
not.result |
logical: if |
write |
a character string naming a file for writing the output into
a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The function is used to update the following Mplus input sections:
TITLE
DATA
DATA IMPUTATION
DATA WIDETOLONG
DATA LONGTOWIDE
DATA TWOPART
DATA MISSING
DATA SURVIVAL
DATA COHORT
VARIABLE
DEFINE
ANALYSIS
MODEL
MODEL INDIRECT
MODEL CONSTRAINT
MODEL TEST
MODEL PRIORS
MONTECARLO
MODEL POPULATION
MODEL COVERAGE
MODEL MISSING
OUTPUT
SAVEDATA
PLOT
...; Specification
The ...; specification is used to update specific options in the VARIABLE and ANALYSIS sections, while keeping all other options in the misty.object of type mplus specified in the argument x. The ...; specification is only available for the VARIABLE and ANALYSIS sections. Note that ...; including the semicolon ; needs to be specified, i.e., ... without the semicolon ; will result in an error message.
---; Specification
The ---; specification is used to remove entire sections (e.g., OUTPUT: ---;) or options within the VARIABLE: and ANALYSIS: sections (e.g., ANALYSIS: ESTIMATOR IS ---;) from the Mplus input. Note that ---; including the semicolon ; needs to be specified, i.e., --- without the semicolon ; will result in an error message.
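Example 3 below removes the entire OUTPUT section; as a complementary minimal sketch, removing a single option within the ANALYSIS section (building on the mod2 object from Example 2 below; the file name is arbitrary):

# Remove the ESTIMATOR option from the ANALYSIS section
update4 <- '
ANALYSIS:
 ESTIMATOR IS ---;
'

# Run updated Mplus input
mod4 <- mplus.update(mod2, update4, file = "ex3_1_update4.inp")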
Comments in the Mplus input can cause problems when the following keywords, in uppercase, lowercase, or mixed upper and lower case letters, appear in comments within the VARIABLE and ANALYSIS sections:
VARIABLE
section: "NAMES", "USEOBSERVATIONS",
"USEVARIABLES", "MISSING", "CENSORED", "CATEGORICAL", "NOMINAL", "COUNT",
"DSURVIVAL", "GROUPING", "IDVARIABLE", "FREQWEIGHT", "TSCORES", "AUXILIARY",
"CONSTRAINT", "PATTERN", "STRATIFICATION", "CLUSTER", "WEIGHT", "WTSCALE",
"BWEIGHT", "B2WEIGHT", "B3WEIGHT", "BWTSCALE", "REPWEIGHTS", "SUBPOPULATION",
"FINITE", "CLASSES", "KNOWNCLASS", "TRAINING", "WITHIN", "BETWEEN", "SURVIVAL",
"TIMECENSORED", "LAGGED"
, or "TINTERVAL"
.
ANALYSIS
section: "TYPE", "ESTIMATOR", "MODEL",
"ALIGNMENT", "DISTRIBUTION", "PARAMETERIZATION", "LINK", "ROTATION",
"ROWSTANDARDIZATION", "PARALLEL", "REPSE", "BASEHAZARD", "CHOLESKY", "ALGORITHM",
"INTEGRATION", "MCSEED", "ADAPTIVE", "INFORMATION", "BOOTSTRAP", "LRTBOOTSTRAP",
"STARTS", "STITERATIONS", "STCONVERGENCE", "STSCALE", "STSEED", "OPTSEED",
"K-1STARTS", "LRTSTARTS", "RSTARTS", "ASTARTS", "H1STARTS", "DIFFTEST",
"MULTIPLIER", "COVERAGE", "ADDFREQUENCY", "ITERATIONS", "SDITERATIONS",
"H1ITERATIONS", "MITERATIONS", "MCITERATIONS", "MUITERATIONS", "RITERATIONS",
"AITERATIONS", "CONVERGENCE", "H1CONVERGENCE", "LOGCRITERION", "RLOGCRITERION",
"MCONVERGENCE", "MCCONVERGENCE", "MUCONVERGENCE", "RCONVERGENCE", "ACONVERGENCE",
"MIXC", "MIXU", "LOGHIGH", "LOGLOW", "UCELLSIZE", "VARIANCE", "SIMPLICITY",
"TOLERANCE", "METRIC", "MATRIX", "POINT", "CHAINS", "BSEED", "STVALUES",
"PREDICTOR", "ALGORITHM", "BCONVERGENCE", "BITERATIONS", "FBITERATIONS",
"THIN", "MDITERATIONS", "KOLMOGOROV", "PRIOR", "INTERACTIVE"
, or "PROCESSORS"
.
Note that comments are removed from the input text by default, i.e., comment = FALSE.
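A short sketch of the comment argument, assuming that comment = TRUE keeps comments in the updated input text (mod0 and update1 refer to Example 1 below):

# Default: comments are removed from the updated input text
mod <- mplus.update(mod0, update1, file = "ex3_1_update.inp")

# Keep comments in the updated input text; in this case, avoid the keywords
# listed above in comments of the VARIABLE and ANALYSIS sections
mod <- mplus.update(mod0, update1, file = "ex3_1_update.inp", comment = TRUE)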
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
|
args |
specification of function arguments |
input |
list with input command sections |
write |
updated write command sections |
result |
list with input command sections ( |
Takuya Yanagida
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.mplus
, write.mplus
, mplus
,
mplus.print
, mplus.plot
, mplus.bayes
,
mplus.run
, mplus.lca
## Not run: #---------------------------------------------------------------------------- # Example 1: Update VARIABLE and MODEL section # Write Mplus Data File write.mplus(ex3_1, file = "ex3_1.dat") # Specify Mplus input input <- ' DATA: FILE IS ex3_1.dat; VARIABLE: NAMES ARE y1 x1 x3; MODEL: y1 ON x1 x3; OUTPUT: SAMPSTAT; ' # Run Mplus input mod0 <- mplus(input, file = "ex3_1.inp") # Update VARIABLE and MODEL section update1 <- ' VARIABLE: ...; USEVARIABLES ARE y1 x1; MODEL: y1 ON x1; ' # Run updated Mplus input mod1 <- mplus.update(mod0, update1, file = "ex3_1_update1.inp") #---------------------------------------------------------------------------- # Example 2: Update ANALYSIS section # Update ANALYSIS section update2 <- ' ANALYSIS: ESTIMATOR IS MLR; ' # Run updated Mplus input mod2 <- mplus.update(mod1, update2, file = "ex3_1_update2.inp") #---------------------------------------------------------------------------- # Example 3: Remove OUTPUT section # Remove OUTPUT section update3 <- ' OUTPUT: ---; ' # Run updated Mplus input mod3 <- mplus.update(mod2, update3, file = "ex3_1_update3.inp") ## End(Not run)
This function is a wrapper function for conducting multilevel confirmatory factor analysis to investigate four types of constructs, i.e., within-cluster constructs, shared cluster-level constructs, configural cluster constructs, and simultaneous shared and configural cluster constructs, by calling the cfa function in the R package lavaan.
multilevel.cfa(..., data = NULL, cluster, model = NULL, rescov = NULL, model.w = NULL, model.b = NULL, rescov.w = NULL, rescov.b = NULL, const = c("within", "shared", "config", "shareconf"), fix.resid = NULL, ident = c("marker", "var", "effect"), ls.fit = FALSE, estimator = c("ML", "MLR"), optim.method = c("nlminb", "em"), missing = c("listwise", "fiml"), print = c("all", "summary", "coverage", "descript", "fit", "est", "modind", "resid"), mod.minval = 6.63, resid.minval = 0.1, digits = 3, p.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. If |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
either a character string indicating the variable name of
the cluster variable in |
model |
a character vector for specifying the same factor structure
with one factor at the Within and Between Level, or a list
of character vectors for specifying the same measurement
model with more than one factor at the Within and Between
Level, e.g., |
rescov |
a character vector or a list of character vectors for specifying
residual covariances at the Within level, e.g. |
model.w |
a character vector specifying a measurement model with one factor at the Within level, or a list of character vectors for specifying a measurement model with more than one factor at the Within level. |
model.b |
a character vector specifying a measurement model with one factor at the Between level, or a list of character vectors for specifying a measurement model with more than one factor at the Between level. |
rescov.w |
a character vector or a list of character vectors for specifying residual covariances at the Within level. |
rescov.b |
a character vector or a list of character vectors for specifying residual covariances at the Between level. |
const |
a character string indicating the type of construct(s), i.e.,
|
fix.resid |
a character vector for specifying residual variances to be
fixed at 0 at the Between level, e.g., |
ident |
a character string indicating the method used for identifying
and scaling latent variables, i.e., |
ls.fit |
logical: if |
estimator |
a character string indicating the estimator to be used:
|
optim.method |
a character string indicating the optimizer, i.e., |
missing |
a character string indicating how to deal with missing data,
i.e., |
print |
a character string or character vector indicating which
results to show on the console, i.e. |
mod.minval |
numeric value to filter modification indices and only
show modifications with a modification index value equal
or higher than this minimum value. By default, modification
indices equal or higher 6.63 are printed. Note that a
modification index value of 6.63 is equivalent to a
significance level of |
resid.minval |
numeric value indicating the minimum absolute residual correlation coefficients and standardized means to highlight in boldface. By default, absolute residual correlation coefficients and standardized means equal or higher 0.1 are highlighted. Note that highlighting can be disabled by setting the minimum value to 1. |
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that loglikelihood,
information criteria and chi-square test statistic is
printed with |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
model |
specified model |
model.fit |
fitted lavaan object ( |
check |
results of the convergence and model identification check |
result |
list with result tables, i.e., |
The function uses the functions cfa, lavInspect, lavTech, modindices, parameterEstimates, and standardizedsolution provided in the R package lavaan by Yves Rosseel (2012).
Takuya Yanagida [email protected]
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02
item.cfa
, multilevel.fit
, multilevel.invar
,
multilevel.omega
, multilevel.cor
, multilevel.descript
## Not run: # Load data set "Demo.twolevel" in the lavaan package data("Demo.twolevel", package = "lavaan") #---------------------------------------------------------------------------- # Model specification using 'x' for a one-factor model # with the same factor structure with one factor at the Within and Between Level #.......... # Cluster variable specification # Example 1a: Cluster variable 'cluster' in 'x' multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4", "cluster")], cluster = "cluster") # Example 1b: Cluster variable 'cluster' not in 'x' multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster) # Example 1c: Alternative specification using the 'data' argument multilevel.cfa(y1:y4, data = Demo.twolevel, cluster = "cluster") #.......... # Type of construct # Example 2a: Within-cluster constructs multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, const = "within") # Example 2b: Shared cluster-level construct multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, const = "shared") # Example 2c: Configural cluster construct (default) multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, const = "config") # Example 2d: Simultaneous shared and configural cluster construct multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, const = "shareconf") #.......... # Residual covariances at the Within level # Example 3a: Residual covariance between 'y1' and 'y3' multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, rescov = c("y1", "y3")) # Example 3b: Residual covariance between 'y1' and 'y3', and 'y2' and 'y4' multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, rescov = list(c("y1", "y3"), c("y2", "y4"))) #.......... # Residual variances at the Between level fixed at 0 # Example 4a: All residual variances fixed at 0 # i.e., strong factorial invariance across clusters multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, fix.resid = "all") # Example 4b: Residual variances of 'y1', 'y2', and 'y4' fixed at 0 # i.e., partial strong factorial invariance across clusters multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, fix.resid = c("y1", "y2", "y4")) #.......... # Print all results # Example 5: Set minimum value for modification indices to 1 multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, print = "all", mod.minval = 1) #.......... # Example 6: lavaan model and summary of the estimated model mod <- multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, output = FALSE) # lavaan model syntax cat(mod$model) # Fitted lavaan object lavaan::summary(mod$model.fit, standardized = TRUE, fit.measures = TRUE) #.......... 
# Write results # Example 7a: Assign results into an object and write results into a text file mod <- multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, print = "all", write = "Multilevel_CFA.txt", output = FALSE) # Example 7b: Assign results into an object and write results into an Excel file mod <- multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, print = "all", output = FALSE) # Write results into an Excel file write.result(mod, "Multilevel_CFA.xlsx") # Estimate model and write results into an Excel file multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")], cluster = Demo.twolevel$cluster, print = "all", write = "Multilevel_CFA.xlsx") #---------------------------------------------------------------------------- # Model specification using 'model' for one or multiple factor model # with the same factor structure at the Within and Between Level # Example 8a: One-factor model multilevel.cfa(Demo.twolevel, cluster = "cluster", model = c("y1", "y2", "y3", "y4")) # Example 8b: Two-factor model multilevel.cfa(Demo.twolevel, cluster = "cluster", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6"))) # Example 8c: Two-factor model with user-specified labels for the factors multilevel.cfa(Demo.twolevel, cluster = "cluster", model = list(factor1 = c("y1", "y2", "y3"), factor2 = c("y4", "y5", "y6"))) #.......... # Type of construct # Example 9a: Within-cluster constructs multilevel.cfa(Demo.twolevel, cluster = "cluster", const = "within", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6"))) # Example 9b: Shared cluster-level construct multilevel.cfa(Demo.twolevel, cluster = "cluster", const = "shared", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6"))) # Example 9c: Configural cluster construct (default) multilevel.cfa(Demo.twolevel, cluster = "cluster", const = "config", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6"))) # Example 9d: Simultaneous shared and configural cluster construct multilevel.cfa(Demo.twolevel, cluster = "cluster", const = "shareconf", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6"))) #.......... # Residual covariances at the Within level # Example 10a: Residual covariance between 'y1' and 'y4' at the Within level multilevel.cfa(Demo.twolevel, cluster = "cluster", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6")), rescov = c("y1", "y4")) # Example 10b: Fix all residual variances at 0 # i.e., strong factorial invariance across clusters multilevel.cfa(Demo.twolevel, cluster = "cluster", model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6")), fix.resid = "all") #---------------------------------------------------------------------------- # Model specification using 'model.w' and 'model.b' for one or multiple factor model # with different factor structure at the Within and Between Level # Example 11a: Two-factor model at the Within level and one-factor model at the Between level multilevel.cfa(Demo.twolevel, cluster = "cluster", model.w = list(c("y1", "y2", "y3"), c("y4", "y5", "y6")), model.b = c("y1", "y2", "y3", "y4", "y5", "y6")) # Example 11b: Residual covariance between 'y1' and 'y4' at the Within level # Residual covariance between 'y5' and 'y6' at the Between level multilevel.cfa(Demo.twolevel, cluster = "cluster", model.w = list(c("y1", "y2", "y3"), c("y4", "y5", "y6")), model.b = c("y1", "y2", "y3", "y4", "y5", "y6"), rescov.w = c("y1", "y4"), rescov.b = c("y5", "y6")) ## End(Not run)
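Beyond the examples above, global and level-specific fit indices can also be requested directly from the fitted lavaan object; a sketch assuming the fit measure labels used by current lavaan versions for two-level models (e.g., "srmr_within" and "srmr_between"):

# Fit the model and suppress the output
mod <- multilevel.cfa(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                      cluster = Demo.twolevel$cluster, output = FALSE)

# Extract selected fit indices, including the level-specific SRMR
lavaan::fitMeasures(mod$model.fit,
                    c("chisq", "df", "cfi", "rmsea", "srmr_within", "srmr_between"))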
This function is a wrapper function for computing the within-group and between-group correlation matrix by calling the sem function in the R package lavaan, and provides standard errors, z test statistics, and significance values (p-values) for testing the hypothesis H0: ρ = 0 for all pairs of variables within and between groups.
multilevel.cor(..., data = NULL, cluster, within = NULL, between = NULL, estimator = c("ML", "MLR"), optim.method = c("nlminb", "em"), missing = c("listwise", "fiml"), sig = FALSE, alpha = 0.05, print = c("all", "cor", "se", "stat", "p"), split = FALSE, order = FALSE, tri = c("both", "lower", "upper"), tri.lower = TRUE, p.adj = c("none", "bonferroni", "holm", "hochberg", "hommel", "BH", "BY", "fdr"), digits = 2, p.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. Alternatively, an expression
indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
either a character string indicating the variable name of
the cluster variable in |
within |
a character vector representing variables that are measured
on the within level and modeled only on the within level.
Variables not mentioned in |
between |
a character vector representing variables that are measured
on the between level and modeled only on the between level.
Variables not mentioned in |
estimator |
a character string indicating the estimator to be used:
|
optim.method |
a character string indicating the optimizer, i.e., |
missing |
a character string indicating how to deal with missing
data, i.e., |
sig |
logical: if |
alpha |
a numeric value between 0 and 1 indicating the significance
level at which correlation coefficients are printed
boldface when |
print |
a character string or character vector indicating which
results to show on the console, i.e. |
split |
logical: if |
order |
logical: if |
tri |
a character string indicating which triangular of the
matrix to show on the console when |
tri.lower |
logical: if |
p.adj |
a character string indicating an adjustment method for
multiple testing based on |
digits |
an integer value indicating the number of decimal places to be used for displaying correlation coefficients. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying p-values. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The specification of the within-group and between-group variables is in line with the syntax in Mplus. That is, the within argument is used to identify the variables in the matrix or data frame specified in x that are measured on the individual level and modeled only on the within level. They are specified to have no variance in the between part of the model. The between argument is used to identify the variables in the matrix or data frame specified in x that are measured on the cluster level and modeled only on the between level. Variables not mentioned in the arguments within or between are measured on the individual level and will be modeled on both the within and between level.
By default, the function uses maximum likelihood estimation with conventional standard errors (estimator = "ML"), which are not robust against non-normality, and the full information maximum likelihood (FIML) method (missing = "fiml") to deal with missing data. The FIML method cannot be used when within-group variables have no variance within some clusters; in these cases, the function switches to listwise deletion. Note that the current lavaan version 0.6-11 supports the FIML method only for maximum likelihood estimation with conventional standard errors (estimator = "ML") in multilevel models. Maximum likelihood estimation with Huber-White robust standard errors (estimator = "MLR") uses listwise deletion to deal with missing data. When using the FIML method, there might be issues in model convergence that can be resolved by switching to listwise deletion (missing = "listwise").
The lavaan package uses a quasi-Newton optimization method ("nlminb") by default. If the optimizer does not converge, model estimation will switch to the Expectation-Maximization (EM) algorithm.
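For instance, a minimal sketch of requesting robust standard errors with listwise deletion (using the Demo.twolevel data set from the lavaan package, as in the examples below):

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Huber-White robust standard errors; missing data are dealt with
# by listwise deletion
multilevel.cor(y1:y3, data = Demo.twolevel, cluster = "cluster",
               estimator = "MLR", missing = "listwise")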
Statistically significant correlation coefficients can be shown in boldface on the console when specifying sig = TRUE. However, this option is not supported when using R Markdown, i.e., the argument sig will switch to FALSE.
The adjustment method for multiple testing specified with the argument p.adj is applied to the within-group and between-group correlation matrix separately.
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame specified in |
args |
specification of function arguments |
model.fit |
fitted lavaan object ( |
result |
list with result tables, i.e., |
The function uses the functions sem, lavInspect, lavMatrixRepresentation, lavTech, parameterEstimates, and standardizedSolution provided in the R package lavaan by Yves Rosseel (2012).
Takuya Yanagida [email protected]
Hox, J., Moerbeek, M., & van de Schoot, R. (2018). Multilevel analysis: Techniques and applications (3rd ed.). Routledge.
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). Sage Publishers.
write.result, multilevel.descript, multilevel.icc, cluster.scores
## Not run: 
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#-------------------------------------------------------------------------------
# Cluster variable specification

# Example 1a: Cluster variable 'cluster' in 'x'
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3", "cluster")], cluster = "cluster")

# Example 1b: Cluster variable 'cluster' not in 'x'
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster)

# Example 1c: Alternative specification using the 'data' argument
multilevel.cor(y1:y3, data = Demo.twolevel, cluster = "cluster")

#-------------------------------------------------------------------------------
# Example 2: All variables modeled on both the within and between level
# Highlight statistically significant results at alpha = 0.05
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], sig = TRUE,
               cluster = Demo.twolevel$cluster)

# Example 3: Split output table into within-group and between-group correlation matrix
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster,
               split = TRUE)

# Example 4: Print correlation coefficients, standard errors, z test statistics,
# and p-values
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster,
               print = "all")

# Example 5: Print correlation coefficients and p-values,
# significance values with Bonferroni correction
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster,
               print = c("cor", "p"), p.adj = "bonferroni")

#-------------------------------------------------------------------------------
# Example 6: Variables "y1", "y2", and "y3" modeled on both the within and between level
# Variables "w1" and "w2" modeled on the cluster level
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
               cluster = Demo.twolevel$cluster, between = c("w1", "w2"))

# Example 7: Show variables specified in the argument 'between' first
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
               cluster = Demo.twolevel$cluster, between = c("w1", "w2"), order = TRUE)

#-------------------------------------------------------------------------------
# Example 8: Variables "y1", "y2", and "y3" modeled only on the within level
# Variables "w1" and "w2" modeled on the cluster level
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
               cluster = Demo.twolevel$cluster, within = c("y1", "y2", "y3"),
               between = c("w1", "w2"))

#-------------------------------------------------------------------------------
# Example 9: lavaan model and summary of the multilevel model used to compute the
# within-group and between-group correlation matrix
mod <- multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")],
                      cluster = Demo.twolevel$cluster, output = FALSE)

# lavaan model syntax
mod$model

# Fitted lavaan object
lavaan::summary(mod$model.fit, standardized = TRUE)

#----------------------------------------------------------------------------
# Write Results

# Example 10a: Write results into a text file
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster,
               write = "Multilevel_Correlation.txt")

# Example 10b: Write results into an Excel file
multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster,
               write = "Multilevel_Correlation.xlsx")

result <- multilevel.cor(Demo.twolevel[, c("y1", "y2", "y3")],
                         cluster = Demo.twolevel$cluster, output = FALSE)
write.result(result, "Multilevel_Correlation.xlsx")

## End(Not run)
This function computes descriptive statistics for two-level and three-level multilevel data, e.g. average cluster size, variance components, intraclass correlation coefficient, design effect, and effective sample size.
multilevel.descript(..., data = NULL, cluster, type = c("1a", "1b"), method = c("aov", "lme4", "nlme"), print = c("all", "var", "sd"), REML = TRUE, digits = 2, icc.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a numeric vector, matrix, or data frame. Alternatively, an
expression indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
a character string indicating the name of the cluster
variable in |
type |
a character string indicating the type of intraclass
correlation coefficient, i.e., |
method |
a character string indicating the method used to estimate
intraclass correlation coefficients, i.e., |
print |
a character string or character vector indicating which results to
show on the console, i.e. |
REML |
logical: if |
digits |
an integer value indicating the number of decimal places to be used. |
icc.digits |
an integer indicating the number of decimal places to be used for displaying intraclass correlation coefficients. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
In a two-level model, the intraclass correlation coefficients, design effect, and the effective sample size are computed based on the random intercept-only model:

$$Y_{ij} = \gamma_{00} + u_{0j} + r_{ij}$$

where the variance in $Y$ is decomposed into two independent components: $\sigma^2_{u_0}$, which represents the variance at Level 2, and $\sigma^2_{r}$, which represents the variance at Level 1 (Hox et al., 2018). For the computation of the intraclass correlation coefficients, see 'Details' in the multilevel.icc function. The design effect represents the effect of cluster sampling on the variance of parameter estimation and is defined by the equation

$$\mathit{deff} = \left( \frac{SE_{\mathit{cluster}}}{SE_{\mathit{simple}}} \right)^2 = 1 + \rho(c - 1)$$

where $SE_{\mathit{cluster}}$ is the standard error under cluster sampling, $SE_{\mathit{simple}}$ is the standard error under simple random sampling, $\rho$ is the intraclass correlation coefficient, ICC(1), and $c$ is the average cluster size. The effective sample size is defined by the equation:

$$N_{\mathit{eff}} = \frac{N}{\mathit{deff}}$$

The effective sample size $N_{\mathit{eff}}$ represents the equivalent total sample size that we should use in estimating the standard error (Snijders & Bosker, 2012).
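As a worked illustration of these two formulas (hypothetical values, not output of the function):

# Hypothetical two-level design: ICC(1) = 0.10, average cluster size c = 20,
# and total sample size N = 1000
rho <- 0.10
c.size <- 20
N <- 1000

deff  <- 1 + rho * (c.size - 1)  # design effect: 1 + 0.10 * 19 = 2.9
N.eff <- N / deff                # effective sample size: 1000 / 2.9 = 344.83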
In a three-level model, the intraclass correlation coefficients, design effect, and the effective sample size are computed based on the random intercept-only model:

$$Y_{ijk} = \gamma_{000} + v_{0k} + u_{0jk} + r_{ijk}$$

where the variance in $Y$ is decomposed into three independent components: $\sigma^2_{v_0}$, which represents the variance at Level 3, $\sigma^2_{u_0}$, which represents the variance at Level 2, and $\sigma^2_{r}$, which represents the variance at Level 1 (Hox et al., 2018). For the computation of the intraclass correlation coefficients, see 'Details' in the multilevel.icc function. The design effect represents the effect of cluster sampling on the variance of parameter estimation and is defined by the equation

$$\mathit{deff} = 1 + \rho_{L2}(c_{L2} - 1) + \rho_{L3}(c_{L2} \cdot c_{L3} - 1)$$

where $\rho_{L2}$ is the ICC(1) at Level 2, $\rho_{L3}$ is the ICC(1) at Level 3, $c_{L2}$ is the average cluster size at Level 2, and $c_{L3}$ is the average cluster size at Level 3.
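And correspondingly for the three-level design effect (again hypothetical values):

# Hypothetical three-level design: ICC(1) = 0.10 at Level 2, ICC(1) = 0.05 at
# Level 3, average cluster sizes c.L2 = 20 and c.L3 = 5, total sample size N = 1000
rho.L2 <- 0.10
rho.L3 <- 0.05
c.L2 <- 20
c.L3 <- 5
N <- 1000

deff  <- 1 + rho.L2 * (c.L2 - 1) + rho.L3 * (c.L2 * c.L3 - 1)  # 1 + 1.9 + 4.95 = 7.85
N.eff <- N / deff                                              # 1000 / 7.85 = 127.39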
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
data frame specified in |
args |
specification of function arguments |
model.fit |
fitted lavaan object ( |
result |
list with result tables, i.e.,
|
Takuya Yanagida [email protected]
Hox, J., Moerbeek, M., & van de Schoot, R. (2018). Multilevel analysis: Techniques and applications (3rd ed.). Routledge.
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). Sage Publishers.
write.result, multilevel.icc, descript
## Not run: 
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#----------------------------------------------------------------------------
# Two-Level Data

#..........
# Cluster variable specification

# Example 1a: Cluster variable 'cluster'
multilevel.descript(Demo.twolevel[, c("y1", "cluster")], cluster = "cluster")

# Example 1b: Cluster variable 'cluster' not in '...'
multilevel.descript(Demo.twolevel$y1, cluster = Demo.twolevel$cluster)

# Example 1c: Alternative specification using the 'data' argument
multilevel.descript(y1, data = Demo.twolevel, cluster = "cluster")

#---------------------------
# Example 2: Multilevel descriptive statistics for 'y1'
multilevel.descript(Demo.twolevel$y1, cluster = Demo.twolevel$cluster)

# Example 3: Multilevel descriptive statistics, print all results including
# variance and standard deviation
multilevel.descript(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, print = "all")

# Example 4: Multilevel descriptive statistics, print ICC with 5 digits
multilevel.descript(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, icc.digits = 5)

# Example 5: Multilevel descriptive statistics
# use lme() function in the nlme package to estimate ICC
multilevel.descript(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, method = "nlme")

# Example 6a: Multilevel descriptive statistics for 'y1', 'y2', 'y3', 'w1', and 'w2'
multilevel.descript(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
                    cluster = Demo.twolevel$cluster)

# Example 6b: Alternative specification using the 'data' argument
multilevel.descript(y1:y3, w1, w2, data = Demo.twolevel, cluster = "cluster")

#----------------------------------------------------------------------------
# Three-Level Data

# Create arbitrary three-level data
Demo.threelevel <- data.frame(Demo.twolevel, cluster2 = Demo.twolevel$cluster,
                              cluster3 = rep(1:10, each = 250))

#..........
# Cluster variable specification

# Example 7a: Cluster variables 'cluster' in '...'
multilevel.descript(Demo.threelevel[, c("y1", "cluster3", "cluster2")],
                    cluster = c("cluster3", "cluster2"))

# Example 7b: Cluster variables 'cluster' not in '...'
multilevel.descript(Demo.threelevel$y1,
                    cluster = Demo.threelevel[, c("cluster3", "cluster2")])

# Example 7c: Alternative specification using the 'data' argument
multilevel.descript(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"))

#----------------------------------------------------------------------------
# Example 8: Multilevel descriptive statistics for 'y1', 'y2', 'y3', 'w1', and 'w2'
multilevel.descript(y1:y3, w1, w2, data = Demo.threelevel,
                    cluster = c("cluster3", "cluster2"))

#----------------------------------------------------------------------------
# Write Results

# Example 9a: Write results into a text file
multilevel.descript(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
                    cluster = Demo.twolevel$cluster, write = "Multilevel_Descript.txt")

# Example 9b: Write results into an Excel file
multilevel.descript(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
                    cluster = Demo.twolevel$cluster, write = "Multilevel_Descript.xlsx")

result <- multilevel.descript(Demo.twolevel[, c("y1", "y2", "y3", "w1", "w2")],
                              cluster = Demo.twolevel$cluster, output = FALSE)
write.result(result, "Multilevel_Descript.xlsx")

## End(Not run)
This function provides simultaneous and level-specific model fit information using the partially saturated model method for multilevel models estimated with the lavaan package. Note that level-specific fit indices cannot be computed when the fitted model contains cross-level constraints, e.g., equal factor loadings across levels in line with the metric cross-level measurement invariance assumption.
multilevel.fit(x, print = c("all", "summary", "fit"), digits = 3, p.digits = 3, write = NULL, append = TRUE, check = TRUE, output = TRUE)
x |
a fitted model of class |
print |
a character string or character vector indicating which results
to show on the console, i.e. |
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that loglikelihood,
information criteria and chi-square test statistic is
printed with |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
x |
a fitted model of class |
args |
specification of function arguments |
model |
specified models, i.e., |
result |
list with result tables, i.e., |
The function uses the functions cfa, fitmeasures, lavInspect, lavTech, and parTable provided in the R package lavaan by Yves Rosseel (2012).
Takuya Yanagida [email protected]
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02
multilevel.cfa, multilevel.invar, multilevel.omega, multilevel.cor, multilevel.descript
## Not run: 
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Model specification
model <- 'level: 1
            fw =~ y1 + y2 + y3
            fw ~ x1 + x2 + x3
          level: 2
            fb =~ y1 + y2 + y3
            fb ~ w1 + w2'

#-------------------------------------------------------------------------------
# Example 1: Model estimation with estimator = "ML"
fit1 <- lavaan::sem(model = model, data = Demo.twolevel, cluster = "cluster",
                    estimator = "ML")

# Simultaneous and level-specific multilevel model fit information
ls.fit1 <- multilevel.fit(fit1)

# Write results into a text file
multilevel.fit(fit1, write = "LS-Fit1.txt")

# Write results into an Excel file
write.result(ls.fit1, "LS-Fit1.xlsx")

# Example 2: Model estimation with estimator = "MLR"
fit2 <- lavaan::sem(model = model, data = Demo.twolevel, cluster = "cluster",
                    estimator = "MLR")

# Simultaneous and level-specific multilevel model fit information
# Write results into an Excel file
multilevel.fit(fit2, write = "LS-Fit2.xlsx")

## End(Not run)
This function computes the intraclass correlation coefficient ICC(1), i.e., proportion of the total variance explained by the grouping structure, and ICC(2), i.e., reliability of aggregated variables in a two-level and three-level model.
multilevel.icc(..., data = NULL, cluster, type = c("1a", "1b", "2"), method = c("aov", "lme4", "nlme"), REML = TRUE, as.na = NULL, check = TRUE)
... |
a numeric vector, matrix, or data frame. Alternatively, an
expression indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
a character string indicating the name of the cluster
variable in |
type |
a character string indicating the type of intraclass correlation
coefficient, i.e., |
method |
a character string indicating the method used to estimate
intraclass correlation coefficients, i.e., |
REML |
logical: if |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
In a two-level model, the intraclass correlation coefficients are computed in the random intercept-only model:

$$Y_{ij} = \gamma_{00} + u_{0j} + r_{ij}$$

where the variance in $Y$ is decomposed into two independent components: $\sigma^2_{u_0}$, which represents the variance at Level 2, and $\sigma^2_{r}$, which represents the variance at Level 1 (Hox et al., 2018). These two variances sum up to the total variance and are referred to as variance components. The intraclass correlation coefficient, ICC(1), requested by type = "1a" represents the proportion of the total variance explained by the grouping structure and is defined by the equation

$$\rho = \frac{\sigma^2_{u_0}}{\sigma^2_{u_0} + \sigma^2_{r}}$$

The intraclass correlation coefficient, ICC(2), requested by type = "2" represents the reliability of aggregated variables and is defined by the equation

$$\lambda = \frac{\sigma^2_{u_0}}{\sigma^2_{u_0} + \sigma^2_{r} / c}$$

where $c$ is the average group size (Snijders & Bosker, 2012).
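A minimal sketch (not the internal implementation of multilevel.icc) of how these two coefficients can be computed by hand from the variance components of a random intercept-only model fitted with the lme4 package:

library(lme4)

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Random intercept-only model for 'y1'
fit <- lmer(y1 ~ 1 + (1 | cluster), data = Demo.twolevel, REML = TRUE)

# Variance components: Level-2 (between-cluster) and Level-1 (within-cluster)
tau2   <- as.numeric(VarCorr(fit)$cluster)
sigma2 <- sigma(fit)^2

# Average group size
c.bar <- mean(table(Demo.twolevel$cluster))

# ICC(1): proportion of total variance at Level 2
icc1 <- tau2 / (tau2 + sigma2)

# ICC(2): reliability of the aggregated variable
icc2 <- tau2 / (tau2 + sigma2 / c.bar)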
In a three-level model, the intraclass correlation coefficients are computed in the random intercept-only model:

$$Y_{ijk} = \gamma_{000} + v_{0k} + u_{0jk} + r_{ijk}$$

where the variance in $Y$ is decomposed into three independent components: $\sigma^2_{v_0}$, which represents the variance at Level 3, $\sigma^2_{u_0}$, which represents the variance at Level 2, and $\sigma^2_{r}$, which represents the variance at Level 1 (Hox et al., 2018). There are two ways to compute the intraclass correlation coefficient in a three-level model. The first method, requested by type = "1a", represents the proportion of variance at Level 2 and Level 3 and should be used if we are interested in a decomposition of the variance across levels. The intraclass correlation coefficient, ICC(1), at Level 2 is defined as:

$$\rho_{L2} = \frac{\sigma^2_{u_0}}{\sigma^2_{v_0} + \sigma^2_{u_0} + \sigma^2_{r}}$$

The ICC(1) at Level 3 is defined as:

$$\rho_{L3} = \frac{\sigma^2_{v_0}}{\sigma^2_{v_0} + \sigma^2_{u_0} + \sigma^2_{r}}$$

The second method, requested by type = "1b", represents the expected correlation between two randomly chosen elements in the same group. The intraclass correlation coefficient, ICC(1), at Level 2 is defined as:

$$\rho_{L2} = \frac{\sigma^2_{v_0} + \sigma^2_{u_0}}{\sigma^2_{v_0} + \sigma^2_{u_0} + \sigma^2_{r}}$$

The ICC(1) at Level 3 is defined as:

$$\rho_{L3} = \frac{\sigma^2_{v_0}}{\sigma^2_{v_0} + \sigma^2_{u_0} + \sigma^2_{r}}$$

Note that both formulas are correct, but express different aspects of the data, which happen to coincide when there are only two levels (Hox et al., 2018).

The intraclass correlation coefficients, ICC(2), requested by type = "2" represent the reliability of aggregated variables at Level 2 and Level 3. The ICC(2) at Level 2 is defined as:

$$\lambda_{L2} = \frac{\sigma^2_{u_0}}{\sigma^2_{u_0} + \sigma^2_{r} / c_{L2}}$$

The ICC(2) at Level 3 is defined as:

$$\lambda_{L3} = \frac{\sigma^2_{v_0}}{\sigma^2_{v_0} + \sigma^2_{u_0} / c_{L3} + \sigma^2_{r} / (c_{L2} \cdot c_{L3})}$$

where $c_{L2}$ is the average group size at Level 2 and $c_{L3}$ is the average group size at Level 3 (Hox et al., 2018).
Returns a numeric vector or matrix with intraclass correlation coefficient(s). In a three-level model, the label L2 is used for ICCs at Level 2 and L3 for ICCs at Level 3.
Takuya Yanagida [email protected]
Hox, J., Moerbeek, M., & van de Schoot, R. (2018). Multilevel analysis: Techniques and applications (3rd ed.). Routledge.
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). Sage Publishers.
multilevel.cfa, multilevel.cor, multilevel.descript
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#----------------------------------------------------------------------------
# Two-Level Models

#..........
# Cluster variable specification

# Example 1a: Cluster variable 'cluster' in '...'
multilevel.icc(Demo.twolevel[, c("y1", "cluster")], cluster = "cluster")

# Example 1b: Cluster variable 'cluster' not in '...'
multilevel.icc(Demo.twolevel$y1, cluster = Demo.twolevel$cluster)

# Example 1c: Alternative specification using the 'data' argument
multilevel.icc(y1, data = Demo.twolevel, cluster = "cluster")

#..........
# Example 2: ICC(1) for 'y1'
multilevel.icc(Demo.twolevel$y1, cluster = Demo.twolevel$cluster)

# Example 3: ICC(2)
multilevel.icc(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, type = "2")

# Example 4: ICC(1)
# use lme() function in the nlme package to estimate ICC
multilevel.icc(Demo.twolevel$y1, cluster = Demo.twolevel$cluster, method = "nlme")

# Example 5a: ICC(1) for 'y1', 'y2', and 'y3'
multilevel.icc(Demo.twolevel[, c("y1", "y2", "y3")], cluster = Demo.twolevel$cluster)

# Example 5b: Alternative specification using the 'data' argument
multilevel.icc(y1:y3, data = Demo.twolevel, cluster = "cluster")

#----------------------------------------------------------------------------
# Three-Level Models

# Create arbitrary three-level data
Demo.threelevel <- data.frame(Demo.twolevel, cluster2 = Demo.twolevel$cluster,
                              cluster3 = rep(1:10, each = 250))

#..........
# Cluster variable specification

# Example 6a: Cluster variables 'cluster' in '...'
multilevel.icc(Demo.threelevel[, c("y1", "cluster3", "cluster2")],
               cluster = c("cluster3", "cluster2"))

# Example 6b: Cluster variables 'cluster' not in '...'
multilevel.icc(Demo.threelevel$y1,
               cluster = Demo.threelevel[, c("cluster3", "cluster2")])

# Example 6c: Alternative specification using the 'data' argument
multilevel.icc(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"))

#..........
# Example 7a: ICC(1), proportion of variance at Level 2 and Level 3
multilevel.icc(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"))

# Example 7b: ICC(1), expected correlation between two randomly chosen elements
# in the same group
multilevel.icc(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"),
               type = "1b")

# Example 7c: ICC(2)
multilevel.icc(y1, data = Demo.threelevel, cluster = c("cluster3", "cluster2"),
               type = "2")
This function computes the confidence interval for the indirect effect in a 1-1-1 multilevel mediation model with random slopes based on the Monte Carlo method.
multilevel.indirect(a, b, se.a, se.b, cov.ab = 0, cov.rand, se.cov.rand, nrep = 100000, alternative = c("two.sided", "less", "greater"), seed = NULL, conf.level = 0.95, digits = 3, write = NULL, append = TRUE, check = TRUE, output = TRUE)
a |
a numeric value indicating the coefficient |
b |
a numeric value indicating the coefficient |
se.a |
a positive numeric value indicating the standard error of
|
se.b |
a positive numeric value indicating the standard error of
|
cov.ab |
a positive numeric value indicating the covariance between
|
cov.rand |
a positive numeric value indicating the covariance between
the random slopes for |
se.cov.rand |
a positive numeric value indicating the standard error of the
covariance between the random slopes for |
nrep |
an integer value indicating the number of Monte Carlo repetitions. |
alternative |
a character string specifying the alternative hypothesis, must be
one of |
seed |
a numeric value specifying the seed of the random number generator when using the Monte Carlo method. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
digits |
an integer value indicating the number of decimal places to be used for displaying |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
In statistical mediation analysis (MacKinnon & Tofighi, 2013), the indirect effect refers to the effect of the independent variable $X$ on the outcome variable $Y$ transmitted by the mediator variable $M$. The magnitude of the indirect effect $ab$ is quantified by the product of the coefficient $a$ (i.e., effect of $X$ on $M$) and the coefficient $b$ (i.e., effect of $M$ on $Y$ adjusted for $X$). However, in a 1-1-1 multilevel mediation model where the variables $X$, $M$, and $Y$ are measured at Level 1, the coefficients $a$ and $b$ can vary across Level-2 units (i.e., random slopes). As a result, $a$ and $b$ may covary, so that the estimate of the indirect effect is no longer simply the product of the coefficients $\hat{a}\hat{b}$, but $\hat{a}\hat{b} + \tau_{a,b}$, where $\tau_{a,b}$ (i.e., cov.rand) is the Level-2 covariance between the random slopes $a$ and $b$ (Kenny et al., 2003). The covariance term needs to be added to $\hat{a}\hat{b}$ only when random slopes are estimated for both $a$ and $b$. Otherwise, the simple product is sufficient to quantify the indirect effect, and the indirect function can be used instead.
In practice, researchers are often interested in confidence limit estimation for the indirect effect. There are several methods for computing a confidence interval for the indirect effect in single-level mediation models (see the indirect function). The Monte Carlo (MC) method (MacKinnon et al., 2004) is a promising method in the single-level mediation model that was also adapted to the multilevel mediation model (Bauer, Preacher & Gil, 2006). This method requires seven pieces of information available from the results of a multilevel mediation model:
1. Coefficient $a$, i.e., the average effect of $X$ on $M$ on the cluster or between-group level. In Mplus, the Estimate of the random slope $a$ under Means at the Between Level.

2. Coefficient $b$, i.e., the average effect of $M$ on $Y$ on the cluster or between-group level. In Mplus, the Estimate of the random slope $b$ under Means at the Between Level.

3. Standard error of $a$. In Mplus, the S.E. of the random slope $a$ under Means at the Between Level.

4. Standard error of $b$. In Mplus, the S.E. of the random slope $b$ under Means at the Between Level.

5. Covariance between $a$ and $b$. In Mplus, the estimated covariance matrix for the parameter estimates (i.e., asymptotic covariance matrix) needs to be requested by specifying TECH3 along with TECH1 in the OUTPUT section. In the TECHNICAL 1 OUTPUT under PARAMETER SPECIFICATION FOR BETWEEN, the numbers of the parameters for the coefficients $a$ and $b$ need to be identified under ALPHA to look up cov.ab in the corresponding row and column in the TECHNICAL 3 OUTPUT under ESTIMATED COVARIANCE MATRIX FOR PARAMETER ESTIMATES.

6. Covariance between the random slopes for $a$ and $b$. In Mplus, the Estimate of the covariance $a$ WITH $b$ at the Between Level.

7. Standard error of the covariance between the random slopes for $a$ and $b$. In Mplus, the S.E. of the covariance $a$ WITH $b$ at the Between Level.
Note that all pieces of information except cov.ab can be looked up in the standard output of the multilevel mediation model. In order to specify cov.ab, the covariance matrix of the parameter estimates (i.e., asymptotic covariance matrix) is required. In practice, cov.ab will often be very small, so that cov.ab may be set to 0 (i.e., the default value) with negligible impact on the results.
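A minimal sketch of the Monte Carlo method described above, mirroring its logic rather than the internal implementation of multilevel.indirect (parameter values taken from Example 1 below; mvrnorm() from the MASS package is used to draw from a bivariate normal distribution):

# Parameter estimates of the multilevel mediation model
a <- 0.25; b <- 0.20; se.a <- 0.11; se.b <- 0.13
cov.ab <- 0.01; cov.rand <- 0.40; se.cov.rand <- 0.02

nrep <- 100000
set.seed(123)

# Draw coefficients a and b from a bivariate normal distribution
ab.sim <- MASS::mvrnorm(nrep, mu = c(a, b),
                        Sigma = matrix(c(se.a^2, cov.ab, cov.ab, se.b^2), ncol = 2))

# Draw the covariance between the random slopes from a normal distribution
tau.sim <- rnorm(nrep, mean = cov.rand, sd = se.cov.rand)

# Simulated indirect effects: a*b plus the random slope covariance
ind <- ab.sim[, 1] * ab.sim[, 2] + tau.sim

# 95% Monte Carlo confidence interval
quantile(ind, probs = c(0.025, 0.975))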
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
list with the input specified in |
args |
specification of function arguments |
result |
list with result tables, i.e., |
The function was adapted from the interactive web tool by Preacher and Selig (2010).
Takuya Yanagida [email protected]
Bauer, D. J., Preacher, K. J., & Gil, K. M. (2006). Conceptualizing and testing random indirect effects and moderated mediation in multilevel models: New procedures and recommendations. Psychological Methods, 11, 142-163. https://doi.org/10.1037/1082-989X.11.2.142
Kenny, D. A., Korchmaros, J. D., & Bolger, N. (2003). Lower level mediation in multilevel models. Psychological Methods, 8, 115-128. https://doi.org/10.1037/1082-989x.8.2.115
MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99-128. https://doi.org/10.1207/s15327906mbr3901_4
MacKinnon, D. P., & Tofighi, D. (2013). Statistical mediation analysis. In J. A. Schinka, W. F. Velicer, & I. B. Weiner (Eds.), Handbook of psychology: Research methods in psychology (pp. 717-735). John Wiley & Sons, Inc.
Preacher, K. J., & Selig, J. P. (2010). Monte Carlo method for assessing multilevel mediation: An interactive tool for creating confidence intervals for indirect effects in 1-1-1 multilevel models [Computer software]. Available from http://quantpsy.org/.
## Not run: 
# Example 1: Confidence Interval for the Indirect Effect
multilevel.indirect(a = 0.25, b = 0.20, se.a = 0.11, se.b = 0.13,
                    cov.ab = 0.01, cov.rand = 0.40, se.cov.rand = 0.02)

# Example 2: Save results of the Monte Carlo method
ab <- multilevel.indirect(a = 0.25, b = 0.20, se.a = 0.11, se.b = 0.13,
                          cov.ab = 0.01, cov.rand = 0.40, se.cov.rand = 0.02,
                          output = FALSE)$result$ab

# Histogram of the distribution of the indirect effect
hist(ab)

# Example 3: Write results into a text file
multilevel.indirect(a = 0.25, b = 0.20, se.a = 0.11, se.b = 0.13,
                    cov.ab = 0.01, cov.rand = 0.40, se.cov.rand = 0.02,
                    write = "ML-Indirect.txt")

## End(Not run)
This function is a wrapper function for evaluating configural, metric, and scalar cross-level measurement invariance using multilevel confirmatory factor analysis with continuous indicators by calling the cfa function in the R package lavaan.
multilevel.invar(..., data = NULL, cluster, model = NULL, rescov = NULL, invar = c("config", "metric", "scalar"), fix.resid = NULL, ident = c("marker", "var", "effect"), estimator = c("ML", "MLR"), optim.method = c("nlminb", "em"), missing = c("listwise", "fiml"), print = c("all", "summary", "coverage", "descript", "fit", "est", "modind", "resid"), print.fit = c("all", "standard", "scaled", "robust"), mod.minval = 6.63, resid.minval = 0.1, digits = 3, p.digits = 3, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. If |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
either a character string indicating the variable name of
the cluster variable in |
model |
a character vector specifying the same factor structure
with one factor at the Within and Between Level, or a list
of character vectors for specifying the same measurement
model with more than one factor at the Within and Between
Level, e.g., |
rescov |
a character vector or a list of character vectors for specifying
residual covariances at the Within level, e.g. |
invar |
a character string indicating the level of measurement invariance
to be evaluated, i.e., |
fix.resid |
a character vector for specifying residual variances to be
fixed at 0 at the Between level for the configural and metric
invariance model, e.g., |
ident |
a character string indicating the method used for identifying
and scaling latent variables, i.e., |
estimator |
a character string indicating the estimator to be used:
|
optim.method |
a character string indicating the optimizer, i.e., |
missing |
a character string indicating how to deal with missing data,
i.e., |
print |
a character string or character vector indicating which
results to show on the console, i.e. |
print.fit |
a character string or character vector indicating which
version of the CFI, TLI, and RMSEA to show on the console,
i.e., |
mod.minval |
a numeric value to filter modification indices and only show modifications with a modification index value equal to or higher than this minimum value. By default, modification indices equal to or higher than 6.63 are printed. Note that a modification index value of 6.63 is equivalent to a significance level of |
resid.minval |
a numeric value indicating the minimum absolute residual correlation coefficients and standardized means to highlight in boldface. By default, absolute residual correlation coefficients and standardized means equal to or higher than 0.1 are highlighted. Note that highlighting can be disabled by setting the minimum value to 1. |
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that information
criteria and chi-square test statistic is printed with
|
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
model |
list with specified model for the configural, metric, and scalar invariance model |
model.fit |
list with fitted lavaan object of the configural, metric, and scalar invariance model |
check |
list with the results of the convergence and model identification check for the configural, metric, and scalar invariance model |
result |
list with result tables, i.e., |
The function uses the function lavTestLRT provided in the R package lavaan by Yves Rosseel (2012).
Takuya Yanagida [email protected]
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02
multilevel.cfa, multilevel.fit, multilevel.omega, multilevel.cor, multilevel.descript
## Not run: 
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#----------------------------------------------------------------------------
# Cluster variable specification

# Example 1a: Cluster variable 'cluster' in 'x'
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4", "cluster")],
                 cluster = "cluster")

# Example 1b: Cluster variable 'cluster' not in 'x'
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster)

# Example 1c: Alternative specification using the 'data' argument
multilevel.invar(y1:y4, data = Demo.twolevel, cluster = "cluster")

#----------------------------------------------------------------------------
# Model specification using 'x' for a one-factor model

#..........
# Level of measurement invariance

# Example 2a: Configural invariance
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, invar = "config")

# Example 2b: Metric invariance
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, invar = "metric")

# Example 2c: Scalar invariance
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, invar = "scalar")

#..........
# Residual covariance at the Within level and residual variance at the Between level

# Example 3a: Residual covariance between "y3" and "y4" at the Within level
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, rescov = c("y3", "y4"))

# Example 3b: Residual variance of 'y1' at the Between level fixed at 0
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, fix.resid = "y1")

#..........
# Example 4: Print all results
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, print = "all")

#..........
# Example 5: lavaan model and summary of the estimated model
mod <- multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                        cluster = Demo.twolevel$cluster, output = FALSE)

# lavaan syntax of the metric invariance model
mod$model$metric

# Fitted lavaan object of the metric invariance model
lavaan::summary(mod$model.fit$metric, standardized = TRUE, fit.measures = TRUE)

#----------------------------------------------------------------------------
# Model specification using 'model' for one or multiple factor model

# Example 6a: One-factor model
multilevel.invar(Demo.twolevel, cluster = "cluster", model = c("y1", "y2", "y3", "y4"))

# Example 6b: Two-factor model
multilevel.invar(Demo.twolevel, cluster = "cluster",
                 model = list(c("y1", "y2", "y3"), c("y4", "y5", "y6")))

#----------------------------------------------------------------------------
# Write results

# Example 7a: Write results into a text file
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, print = "all",
                 write = "Multilevel_Invariance.txt")

# Example 7b: Write results into an Excel file
multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, print = "all",
                 write = "Multilevel_Invariance.xlsx")

# Assign results into an object and write results into an Excel file
mod <- multilevel.invar(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                        cluster = Demo.twolevel$cluster, print = "all",
                        output = FALSE)

# Write results into an Excel file
write.result(mod, "Multilevel_Invariance.xlsx")

## End(Not run)
This function computes a point estimate and Monte Carlo confidence interval for the multilevel composite reliability defined by Lai (2021) for a within-cluster construct, a shared cluster-level construct, and a configural cluster construct by calling the cfa function in the R package lavaan.
multilevel.omega(..., data = NULL, cluster, rescov = NULL, const = c("within", "shared", "config"), fix.resid = NULL, optim.method = c("nlminb", "em"), missing = c("listwise", "fiml"), nrep = 100000, seed = NULL, conf.level = 0.95, print = c("all", "omega", "item"), digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame. Multilevel confirmatory factor
analysis based on a measurement model with one factor
at the Within level and one factor at the Between level
comprising all variables in the matrix or data frame is
conducted. Note that the cluster variable specified in
|
data |
a data frame when specifying one or more variables in the
argument |
cluster |
either a character string indicating the variable name of
the cluster variable in |
rescov |
a character vector or a list of character vectors for specifying
residual covariances at the Within level, e.g. |
const |
a character string indicating the type of construct(s), i.e.,
|
fix.resid |
a character vector for specifying residual variances to be
fixed at 0 at the Between level, e.g., |
optim.method |
a character string indicating the optimizer, i.e., |
missing |
a character string indicating how to deal with missing data,
i.e., |
nrep |
an integer value indicating the number of Monte Carlo repetitions for computing confidence intervals. |
seed |
a numeric value specifying the seed of the random number generator for computing the Monte Carlo confidence interval. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
print |
a character vector indicating which results to show, i.e.
|
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that loglikelihood,
information criteria and chi-square test statistic is
printed with |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
call |
function call |
type |
type of analysis |
data |
data frame specified in |
args |
specification of function arguments |
model |
specified model |
model.fit |
fitted lavaan object ( |
check |
results of the convergence and model identification check |
result |
list with result tables, i.e., |
The function uses the functions lavInspect, lavTech, and lavNames provided in the R package lavaan by Yves Rosseel (2012). The internal function .internal.mvrnorm is a copy of the mvrnorm function in the package MASS by Venables and Ripley (2002).
Takuya Yanagida [email protected]
Lai, M. H. C. (2021). Composite reliability of multilevel data: It’s about observed scores and construct meanings. Psychological Methods, 26(1), 90–102. https://doi.org/10.1037/met0000287
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48, 1-36. https://doi.org/10.18637/jss.v048.i02
Venables, W. N., & Ripley, B. D. (2002). Modern Applied Statistics with S (4th ed.). Springer. https://www.stats.ox.ac.uk/pub/MASS4/.
item.omega, multilevel.cfa, multilevel.fit, multilevel.invar, multilevel.cor, multilevel.descript
## Not run: 
# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#-------------------------------------------------------------------------------
# Cluster variable specification

# Example 1a: Cluster variable 'cluster' in 'x'
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4", "cluster")],
                 cluster = "cluster")

# Example 1b: Cluster variable 'cluster' not in 'x'
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster)

# Example 1c: Alternative specification using the 'data' argument
multilevel.omega(y1:y4, data = Demo.twolevel, cluster = "cluster")

#-------------------------------------------------------------------------------
# Type of construct

# Example 2a: Within-Cluster Construct
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, const = "within")

# Example 2b: Shared Cluster-Level Construct
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, const = "shared")

# Example 2c: Configural Construct
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, const = "config")

#-------------------------------------------------------------------------------
# Residual covariance at the Within level and residual variance at the Between level

# Example 3a: Residual covariance between "y3" and "y4" at the Within level
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, const = "config",
                 rescov = c("y3", "y4"))

# Example 3b: Residual variances of 'y1' and 'y2' at the Between level fixed at 0
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, const = "config",
                 fix.resid = c("y1", "y2"), digits = 3)

#----------------------------------------------------------------------------
# Write results

# Example 4a: Write results into a text file
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, write = "Multilevel_Omega.txt")

# Example 4b: Write results into an Excel file
multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                 cluster = Demo.twolevel$cluster, write = "Multilevel_Omega.xlsx")

# Example 4c: Assign results into an object and write results into an Excel file
mod <- multilevel.omega(Demo.twolevel[, c("y1", "y2", "y3", "y4")],
                        cluster = Demo.twolevel$cluster, output = FALSE)

# Write results into an Excel file
write.result(mod, "Multilevel_Omega.xlsx")

## End(Not run)
This function computes R-squared measures by Raudenbush and Bryk (2002),
Snijders and Bosker (1994), Nakagawa and Schielzeth (2013) as extended by
Johnson (2014), and Rights and Sterba (2019) for multilevel and linear mixed
effects models estimated using the lmer() function in the package lme4 or the
lme() function in the package nlme.
multilevel.r2(model, print = c("all", "RB", "SB", "NS", "RS"), digits = 3,
              plot = FALSE, gray = FALSE, start = 0.15, end = 0.85,
              color = c("#D55E00", "#0072B2", "#CC79A7", "#009E73", "#E69F00"),
              write = NULL, append = TRUE, check = TRUE, output = TRUE)
model |
a fitted model of class |
print |
a character vector indicating which R-squared measures to
print on the console, i.e., |
digits |
an integer value indicating the number of decimal places to be used. |
plot |
logical: if |
gray |
logical: if |
start |
a numeric value between 0 and 1, graphical parameter to specify the gray value at the low end of the palette. |
end |
a numeric value between 0 and 1, graphical parameter to specify the gray value at the high end of the palette. |
color |
a character vector, graphical parameter indicating the color of bars in the bar chart in the following order: Fixed slopes (Within), Fixed slopes (Between), Slope variation (Within), Intercept variation (Between), and Residual (Within). By default, colors from the colorblind-friendly palettes are used |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
A number of R-squared measures for multilevel and linear mixed effects models have been developed in the methodological literature (see Rights & Sterba, 2018). Based on these measures, the following measures are implemented in the current function:
R-squared measures by Raudenbush and Bryk (2002) are based on the proportional
reduction of unexplained variance when predictors are added. More specifically,
variance estimates from the baseline/null model (i.e., σ²_b and τ₀₀_b) and
variance estimates from the model including predictors (i.e., σ²_m and τ₀₀_m)
are used to compute the proportional reduction in variance between the
baseline/null model and the complete model by:

R²₁(RB) = (σ²_b − σ²_m) / σ²_b

for the proportional reduction at level-1 (within-cluster) and by:

R²₂(RB) = (τ₀₀_b − τ₀₀_m) / τ₀₀_b

for the proportional reduction at level-2 (between-cluster), where the
subscripts b and m represent the baseline and full models, respectively
(Hox et al., 2018; Roberts et al., 2011).
A major disadvantage of these measures is that adding predictors can increase
rather than decrease some of the variance components, and it is even possible
to obtain negative values for R² with these formulas (Snijders & Bosker, 2012).
According to Snijders and Bosker (1994), this can occur because the
between-group variance is a function of both the level-1 and the level-2
variance:

Var(Ȳ_j) = τ₀₀ + σ²/n_j

Hence, adding a predictor (e.g., a cluster-mean centered predictor) that
explains a proportion of the within-group variance will decrease the estimate
of σ² and increase the estimate of τ₀₀ if this predictor does not explain a
proportion of the between-group variance to balance out the decrease in σ²
(LaHuis et al., 2014). Negative estimates for R² can also simply occur due to
chance fluctuation in sample estimates from the two models.
Another disadvantage of these measures is that R²₂(RB) for the explained
variance at level-2 has been shown to perform poorly in simulation studies,
even with a large number of clusters and large cluster sizes (LaHuis et al.,
2014; Rights & Sterba, 2019).
Moreover, when there is missing data in the level-1 predictors, it is possible that sample sizes for the baseline and complete models differ.
Finally, it should be noted that the R-squared measures by Raudenbush and Bryk (2002) are appropriate for random intercept models, but not for random intercept and slope models. For random slope models, Snijders and Bosker (2012) suggested re-estimating the model as a random intercept model with the same predictors while omitting the random slopes to compute the R-squared measures. However, the simulation study by LaHuis et al. (2014) suggested that the R-squared measures showed an acceptable performance when there was little slope variance, but did not perform well in the presence of higher levels of slope variance.
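To illustrate the computation, the following minimal sketch (not part of misty) extracts the variance components of a baseline/null model and a model including predictors fitted with the lmer() function and applies the two formulas above; the models and object names are arbitrary and chosen for illustration only.

# Minimal sketch: R-squared measures by Raudenbush and Bryk (2002) by hand
library(lme4)
data("Demo.twolevel", package = "lavaan")

# Baseline/null model and model including a level-1 and a level-2 predictor
mod0 <- lmer(y1 ~ 1 + (1 | cluster), data = Demo.twolevel, REML = FALSE)
mod1 <- lmer(y1 ~ x2 + w1 + (1 | cluster), data = Demo.twolevel, REML = FALSE)

# Extract sigma2 (level-1 residual variance) and tau00 (intercept variance)
vc0 <- as.data.frame(VarCorr(mod0))
vc1 <- as.data.frame(VarCorr(mod1))
sigma2.b <- vc0[vc0$grp == "Residual", "vcov"]
tau00.b  <- vc0[vc0$grp == "cluster", "vcov"]
sigma2.m <- vc1[vc1$grp == "Residual", "vcov"]
tau00.m  <- vc1[vc1$grp == "cluster", "vcov"]

# Proportional reduction in variance at level 1 and level 2
R2.1.RB <- (sigma2.b - sigma2.m) / sigma2.b
R2.2.RB <- (tau00.b - tau00.m) / tau00.b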
R-squared measures by Snijders and Bosker (1994) are based on the proportional
reduction of mean squared prediction error and are computed using the formula:

R²₁(SB) = 1 − (σ²_m + τ₀₀_m) / (σ²_b + τ₀₀_b)

for computing the proportional reduction of error at level-1, representing the
total amount of explained variance, and using the formula:

R²₂(SB) = 1 − (σ²_m/n + τ₀₀_m) / (σ²_b/n + τ₀₀_b)

for computing the proportional reduction of error at level-2 by dividing σ²
by the group cluster size n or by the average cluster size for unbalanced data
(Roberts et al., 2011). Note that the function uses the harmonic mean of the
group sizes as recommended by Snijders and Bosker (1994). The population values
of R² based on these measures cannot be negative because the interplay of the
level-1 and level-2 variance components is considered. However, sample
estimates of R² can be negative either due to chance fluctuation when sample
sizes are small or due to model misspecification (Snijders & Bosker, 2012).
When there is missing data in the level-1 predictors, it is possible that sample sizes for the baseline and complete models differ.
Similar to the R-squared measures by Raudenbush and Bryk (2002), the measures
by Snijders and Bosker (1994) are appropriate for random intercept models, but
not for random intercept and slope models. Accordingly, for random slope models,
Snijders and Bosker (2012) suggested re-estimating the model as a random
intercept model with the same predictors while omitting the random slopes to
compute the R-squared measures. The simulation study by LaHuis et al. (2014)
revealed that the R-squared measures showed an acceptable performance, but it
should be noted that the explained variance at level-2 was not investigated in
their study.
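Continuing the sketch above (not part of misty), the two formulas can be applied to the same variance components, using the harmonic mean of the cluster sizes as recommended by Snijders and Bosker (1994):

# Minimal sketch: R-squared measures by Snijders and Bosker (1994) by hand
n.j <- table(Demo.twolevel$cluster)     # cluster sizes
n.harm <- length(n.j) / sum(1 / n.j)    # harmonic mean of the cluster sizes

# Proportional reduction of mean squared prediction error at level 1
R2.1.SB <- 1 - (sigma2.m + tau00.m) / (sigma2.b + tau00.b)

# Proportional reduction of mean squared prediction error at level 2
R2.2.SB <- 1 - (sigma2.m / n.harm + tau00.m) / (sigma2.b / n.harm + tau00.b)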
R-squared measures by Nakagawa and Schielzeth (2013) are based on partitioning
model-implied variance from a single fitted model and use the variance of the
predicted scores (i.e., σ²_f, the variance of the fixed-effects predicted
values) to form both the outcome variance in the denominator and the explained
variance in the numerator of the formulas:

R²(NS, marginal) = σ²_f / (σ²_f + τ₀₀ + σ²)

for the marginal total R², and:

R²(NS, conditional) = (σ²_f + τ₀₀) / (σ²_f + τ₀₀ + σ²)

for the conditional total R². In the former formula, predicted scores are
marginalized across random effects to indicate the variance explained by fixed
effects, and in the latter formula, predicted scores are conditioned on random
effects to indicate the variance explained by fixed and random effects
(Rights & Sterba, 2019).
The advantage of these measures is that they can never become negative and
that they can also be extended to generalized linear mixed effects models
(GLMM) when outcome variables are not continuous (e.g., binary outcome
variables). Note that the function currently does not provide R² measures for
GLMMs, but these measures can be obtained using the r.squaredGLMM() function
in the MuMIn package.
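Continuing the sketch above (not part of misty), both measures can be computed from the single model fit mod1 by estimating σ²_f as the variance of the fixed-effects predicted values; the same values can be obtained with the r.squaredGLMM() function in the MuMIn package.

# Minimal sketch: marginal and conditional R-squared by hand
sigma2.f <- var(predict(mod1, re.form = NA))  # variance of fixed-effects predictions
vc1 <- as.data.frame(VarCorr(mod1))
tau00  <- vc1[vc1$grp == "cluster", "vcov"]
sigma2 <- vc1[vc1$grp == "Residual", "vcov"]

R2.marginal    <- sigma2.f / (sigma2.f + tau00 + sigma2)
R2.conditional <- (sigma2.f + tau00) / (sigma2.f + tau00 + sigma2)

# Same measures using the MuMIn package
# MuMIn::r.squaredGLMM(mod1)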
A disadvantage is that these measures do not allow random slopes and are
restricted to the simplest random effect structure (i.e., the random intercept
model). In other words, these measures do not fully reflect the structure of
the fitted model when using random intercept and slope models. However,
Johnson (2014) extended these measures to allow random slopes by taking into
account the contribution of random slopes, intercept-slope covariances, and
the covariance matrix of the random slopes to the variance of the outcome. As
a result, the R-squared measures by Nakagawa and Schielzeth (2013) as extended
by Johnson (2014) can be used for both random intercept, and random intercept
and slope models.
The major criticism of the R-squared measures by Nakagawa and Schielzeth (2013)
as extended by Johnson (2014) is that these measures do not decompose the
outcome variance into total, within-cluster, and between-cluster variance,
which precludes computing level-specific R² measures. In addition, these
measures do not distinguish variance attributable to level-1 versus level-2
predictors via fixed effects, and they also do not distinguish between random
intercept and random slope variation (Rights & Sterba, 2019).
R-squared measures by Rights and Sterba (2019) provide an integrative framework of R-squared measures for multilevel and linear mixed effects models with random intercepts and/or slopes. Their measures are also based on partitioning model-implied variance from a single fitted model, but they provide a full partitioning of the total outcome variance into one of five specific sources:

variance attributable to level-1 predictors via fixed slopes (shorthand: variance attributable to f1)

variance attributable to level-2 predictors via fixed slopes (shorthand: variance attributable to f2)

variance attributable to level-1 predictors via random slope variation/covariation (shorthand: variance attributable to v)

variance attributable to cluster-specific outcome means via random intercept variation (shorthand: variance attributable to m)

variance attributable to level-1 residuals
The R² measures are based on the outcome variance of interest (total,
within-cluster, or between-cluster) in the denominator, and the source
contributing to explained variance in the numerator:

Total R² measures incorporate both within-cluster and between-cluster variance in the denominator and quantify variance explained in an omnibus sense:

R²_t(f1): Proportion of total outcome variance explained by level-1 predictors via fixed slopes.

R²_t(f2): Proportion of total outcome variance explained by level-2 predictors via fixed slopes.

R²_t(f): Proportion of total outcome variance explained by all predictors via fixed slopes.

R²_t(v): Proportion of total outcome variance explained by level-1 predictors via random slope variation/covariation.

R²_t(m): Proportion of total outcome variance explained by cluster-specific outcome means via random intercept variation.

R²_t(fv): Proportion of total outcome variance explained by predictors via fixed slopes and random slope variation/covariation.

R²_t(fvm): Proportion of total outcome variance explained by predictors via fixed slopes and random slope variation/covariation and by cluster-specific outcome means via random intercept variation.
Within-cluster R² measures incorporate only within-cluster variance in the denominator and indicate the degree to which within-cluster variance can be explained by a given model:

R²_w(f1): Proportion of within-cluster outcome variance explained by level-1 predictors via fixed slopes.

R²_w(v): Proportion of within-cluster outcome variance explained by level-1 predictors via random slope variation/covariation.

R²_w(f1v): Proportion of within-cluster outcome variance explained by level-1 predictors via fixed slopes and random slope variation/covariation.
Between-cluster R² measures incorporate only between-cluster variance in the denominator and indicate the degree to which between-cluster variance can be explained by a given model:

R²_b(f2): Proportion of between-cluster outcome variance explained by level-2 predictors via fixed slopes.

R²_b(m): Proportion of between-cluster outcome variance explained by cluster-specific outcome means via random intercept variation.
The decomposition of the total outcome variance can be visualized in a bar
chart by specifying plot = TRUE. The first column of the bar chart decomposes
scaled total variance into five distinct proportions (i.e., variance
attributable to f1, f2, v, m, and level-1 residuals), the second column
decomposes scaled within-cluster variance into three distinct proportions
(i.e., variance attributable to f1, v, and level-1 residuals), and the third
column decomposes scaled between-cluster variance into two distinct
proportions (i.e., variance attributable to f2 and m).
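As a quick check of this decomposition, the proportions within each column of the decomp table returned by the function sum to 1. A minimal sketch, assuming a fitted lmer model (e.g., mod1a from the Examples section) and the result structure used in Example 4 below:

# Decomposition of scaled total, within-cluster, and between-cluster variance
mod.r2 <- multilevel.r2(mod1a, output = FALSE)
mod.r2$result$rs$decomp

# Each column sums to 1 (sources that do not apply to a column are NA)
colSums(mod.r2$result$rs$decomp, na.rm = TRUE)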
Note that the function assumes that all level-1 predictors are centered within
cluster (i.e., group-mean or cluster-mean centering), as has been widely
recommended (e.g., Enders & Tofighi, 2007; Rights et al., 2020). In fact, it
does not matter whether a lower-level predictor is merely a control variable,
or whether it is quantitative or categorical (Yaremych et al., 2021);
cluster-mean centering should always be used for lower-level predictors to
obtain an orthogonal between-within partitioning of a lower-level predictor's
variance that directly parallels what happens to a level-1 outcome
(Hoffman & Walters, 2022). In the absence of cluster-mean centering, however,
the function provides total R² measures, but does not provide any
within-cluster or between-cluster R² measures.
By default, the function only computes R-squared measures by Rights and Sterba
(2019) because the other R-squared measures reflect the same population
quantities provided by Rights and Sterba (2019). That is, the R-squared
measures R²₁(RB) and R²₂(RB) by Raudenbush and Bryk (2002) are equivalent to
R²_w(f1v) and R²_b(f2), the R-squared measures R²₁(SB) and R²₂(SB) by Snijders
and Bosker (1994) are equivalent to R²_t(f) and R²_b(f2), and the R-squared
measures R²(NS, marginal) and R²(NS, conditional) by Nakagawa and Schielzeth
(2013) as extended by Johnson (2014) are equivalent to R²_t(f) and R²_t(fvm)
(see Rights & Sterba, 2019, Table 3).
Note that none of these measures provide an R² for the random slope variance
explained by cross-level interactions, a quantity that is frequently of
interest (Hoffman & Walters, 2022).
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
list with result tables, i.e., |
This function is based on the multilevelR2()
function from the mitml
package by Simon Grund, Alexander Robitzsch and Oliver Luedtke (2021), and a
copy of the function r2mlm
in the r2mlm package by Mairead Shaw,
Jason Rights, Sonya Sterba, and Jessica Flake.
Simon Grund, Alexander Robitzsch, Oliver Luedtke, Mairead Shaw, Jason D. Rights, Sonya K. Sterba, Jessica K. Flake, and Takuya Yanagida
Enders, C. K., & Tofighi, D. (2007). Centering predictor variables in cross-sectional multilevel models: A new look at an old issue. Psychological Methods, 12, 121-138. https://doi.org/10.1037/1082-989X.12.2.121
Hoffman, L., & Walters, R. W. (2022). Catching up on multilevel modeling. Annual Review of Psychology, 73, 629-658. https://doi.org/10.1146/annurev-psych-020821-103525
Hox, J., Moerbeek, M., & van de Schoot, R. (2018). Multilevel Analysis: Techniques and Applications (3rd ed.) Routledge.
Johnson, P. C. D. (2014). Extension of Nakagawa & Schielzeth’s R2 GLMM to random slopes models. Methods in Ecology and Evolution, 5(9), 944-946. https://doi.org/10.1111/2041-210X.12225
LaHuis, D. M., Hartman, M. J., Hakoyama, S., & Clark, P. C. (2014). Explained variance measures for multilevel models. Organizational Research Methods, 17, 433-451. https://doi.org/10.1177/1094428114541701
Nakagawa, S., & Schielzeth, H. (2013). A general and simple method for obtaining R2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4(2), 133-142. https://doi.org/10.1111/j.2041-210x.2012.00261.x
Raudenbush, S. W., & Bryk, A. S., (2002). Hierarchical linear models: Applications and data analysis methods. Sage.
Rights, J. D., Preacher, K. J., & Cole, D. A. (2020). The danger of conflating level-specific effects of control variables when primary interest lies in level-2 effects. British Journal of Mathematical and Statistical Psychology, 73(Suppl 1), 194-211. https://doi.org/10.1111/bmsp.12194
Rights, J. D., & Sterba, S. K. (2019). Quantifying explained variance in multilevel models: An integrative framework for defining R-squared measures. Psychological Methods, 24, 309-338. https://doi.org/10.1037/met0000184
Roberts, J. K., Monaco, J. P., Stovall, H., & Foster, V. (2011). Explained variance in multilevel models. In J. J. Hox & J. K. Roberts (Eds.), Handbook of advanced multilevel analysis (pp. 219-230). Routledge.
Snijders, T. A. B., & Bosker, R. J. (1994). Modeled variance in two-level models. Sociological Methods & Research, 22, 342-363. https://doi.org/10.1177/0049124194022003004
Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). Sage.
Yaremych, H. E., Preacher, K. J., & Hedeker, D. (2021). Centering categorical predictors in multilevel models: Best practices and interpretation. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000434
multilevel.cor, multilevel.descript, multilevel.icc, multilevel.indirect
## Not run: 
# Load misty, lme4, nlme, and ggplot2 package
library(misty)
library(lme4)
library(nlme)
library(ggplot2)

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#----------------------------------------------------------------------------

# Cluster mean centering, center() from the misty package
Demo.twolevel$x2.c <- center(Demo.twolevel$x2, type = "CWC",
                             cluster = Demo.twolevel$cluster)

# Compute group means, cluster.scores() from the misty package
Demo.twolevel$x2.b <- cluster.scores(Demo.twolevel$x2,
                                     cluster = Demo.twolevel$cluster)

# Estimate multilevel model using the lme4 package
mod1a <- lmer(y1 ~ x2.c + x2.b + w1 + (1 + x2.c | cluster), data = Demo.twolevel,
              REML = FALSE, control = lmerControl(optimizer = "bobyqa"))

# Estimate multilevel model using the nlme package
mod1b <- lme(y1 ~ x2.c + x2.b + w1, random = ~ 1 + x2.c | cluster,
             data = Demo.twolevel, method = "ML")

#----------------------------------------------------------------------------

# Example 1a: R-squared measures according to Rights and Sterba (2019)
multilevel.r2(mod1a)

# Example 1b: R-squared measures according to Rights and Sterba (2019)
multilevel.r2(mod1b)

# Example 1c: Write results into a text file
multilevel.r2(mod1a, write = "ML-R2.txt")

#-------------------------------------------------------------------------------

# Example 2: Bar chart showing the decomposition of scaled total, within-cluster,
# and between-cluster outcome variance
multilevel.r2(mod1a, plot = TRUE)

# Bar chart in gray scale
multilevel.r2(mod1a, plot = TRUE, gray = TRUE)

# Save bar chart, ggsave() from the ggplot2 package
ggsave("Proportion_of_Variance.png", dpi = 600, width = 5.5, height = 5.5)

#-------------------------------------------------------------------------------

# Example 3: Estimate multilevel model without random slopes
# Note. R-squared measures by Raudenbush and Bryk (2002), and Snijders and
# Bosker (2012) should be computed based on the random intercept model
mod2 <- lmer(y1 ~ x2.c + x2.b + w1 + (1 | cluster), data = Demo.twolevel,
             REML = FALSE, control = lmerControl(optimizer = "bobyqa"))

# Print all available R-squared measures
multilevel.r2(mod2, print = "all")

#-------------------------------------------------------------------------------

# Example 4: Draw bar chart manually
mod1a.r2 <- multilevel.r2(mod1a, output = FALSE)

# Prepare data frame for ggplot()
df <- data.frame(var = factor(rep(c("Total", "Within", "Between"), each = 5),
                              level = c("Total", "Within", "Between")),
                 part = factor(c("Fixed Slopes (Within)", "Fixed Slopes (Between)",
                                 "Slope Variation (Within)",
                                 "Intercept Variation (Between)", "Residual (Within)"),
                               level = c("Residual (Within)",
                                         "Intercept Variation (Between)",
                                         "Slope Variation (Within)",
                                         "Fixed Slopes (Between)",
                                         "Fixed Slopes (Within)")),
                 y = as.vector(mod1a.r2$result$rs$decomp))

# Draw bar chart in line with the default setting of multilevel.r2()
ggplot(df, aes(x = var, y = y, fill = part)) +
  theme_bw() +
  geom_bar(stat = "identity") +
  scale_fill_manual(values = c("#E69F00", "#009E73", "#CC79A7", "#0072B2", "#D55E00")) +
  scale_y_continuous(name = "Proportion of Variance", breaks = seq(0, 1, by = 0.1)) +
  theme(axis.title.x = element_blank(),
        axis.ticks.x = element_blank(),
        legend.title = element_blank(),
        legend.position = "bottom",
        legend.box.margin = margin(-10, 6, 6, 6)) +
  guides(fill = guide_legend(nrow = 2, reverse = TRUE))

## End(Not run)
This function computes R-squared measures by Rights and Sterba (2019) for multilevel and linear mixed effects models by manually inputting parameter estimates.
multilevel.r2.manual(data, within = NULL, between = NULL, random = NULL,
                     gamma.w = NULL, gamma.b = NULL, tau, sigma2,
                     intercept = TRUE, center = TRUE, digits = 3, plot = FALSE,
                     gray = FALSE, start = 0.15, end = 0.85,
                     color = c("#D55E00", "#0072B2", "#CC79A7", "#009E73", "#E69F00"),
                     write = NULL, append = TRUE, check = TRUE, output = TRUE)
data |
a matrix or data frame with the level-1 and level-2 predictors and outcome variable used in the model. |
within |
a character vector with the variable names in |
between |
a character vector with the variable names in |
random |
a character vector with the variable names in |
gamma.w |
a numeric vector of fixed slope estimates for all level-1
predictors, to be entered in the order of the predictors
listed in the argument |
gamma.b |
a numeric vector of the intercept and fixed slope estimates
for all level-2 predictors, to be entered in the order of the
predictors listed in the argument |
tau |
a matrix indicating the random effects covariance matrix, the
first row/column denotes the intercept variance and covariances
(if intercept is fixed, set all to 0) and each subsequent
row/column denotes a given random slope's variance and covariances
(to be entered in the order listed in the argument |
sigma2 |
a numeric value indicating the level-1 residual variance. |
intercept |
logical: if |
center |
logical: if |
digits |
an integer value indicating the number of decimal places to be used. |
plot |
logical: if |
gray |
logical: if |
start |
a numeric value between 0 and 1, graphical parameter to specify the gray value at the low end of the palette. |
end |
a numeric value between 0 and 1, graphical parameter to specify the gray value at the high end of the palette. |
color |
a character vector, graphical parameter indicating the color of bars in the bar chart in the following order: Fixed slopes (Within), Fixed slopes (Between), Slope variation (Within), Intercept variation (Between), and Residual (Within). By default, colors from the colorblind-friendly palettes are used. |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
A number of R-squared measures for multilevel and linear mixed effects models
have been developed in the methodological literature (see Rights & Sterba, 2018).
R-squared measures by Rights and Sterba (2019) provide an integrative framework
of R-squared measures for multilevel and linear mixed effects models with random
intercepts and/or slopes. Their measures are based on partitioning model-implied
variance from a single fitted model, and they provide a full partitioning of
the total outcome variance into one of five specific sources. See the help page
of the multilevel.r2 function for more details.
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
list with result tables, i.e., |
This function is based on a copy of the function r2mlm_manual()
in the
r2mlm package by Mairead Shaw, Jason Rights, Sonya Sterba, and Jessica
Flake.
Jason D. Rights, Sonya K. Sterba, Jessica K. Flake, and Takuya Yanagida
Rights, J. D., & Cole, D. A. (2018). Effect size measures for multilevel models in clinical child and adolescent research: New r-squared methods and recommendations. Journal of Clinical Child and Adolescent Psychology, 47, 863-873. https://doi.org/10.1080/15374416.2018.1528550
Rights, J. D., & Sterba, S. K. (2019). Quantifying explained variance in multilevel models: An integrative framework for defining R-squared measures. Psychological Methods, 24, 309-338. https://doi.org/10.1037/met0000184
multilevel.r2, multilevel.cor, multilevel.descript, multilevel.icc, multilevel.indirect
## Not run: 
# Load misty and lme4 package
library(misty)
library(lme4)

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

#-------------------------------------------------------------------------------

# Cluster mean centering, center() from the misty package
Demo.twolevel$x2.c <- center(Demo.twolevel$x2, type = "CWC",
                             cluster = Demo.twolevel$cluster)

# Compute group means, cluster.scores() from the misty package
Demo.twolevel$x2.b <- cluster.scores(Demo.twolevel$x2,
                                     cluster = Demo.twolevel$cluster)

# Estimate random intercept model using the lme4 package
mod1 <- lmer(y1 ~ x2.c + x2.b + w1 + (1 | cluster), data = Demo.twolevel,
             REML = FALSE, control = lmerControl(optimizer = "bobyqa"))

# Estimate random intercept and slope model using the lme4 package
mod2 <- lmer(y1 ~ x2.c + x2.b + w1 + (1 + x2.c | cluster), data = Demo.twolevel,
             REML = FALSE, control = lmerControl(optimizer = "bobyqa"))

#-------------------------------------------------------------------------------

# Example 1: Random intercept model

# Fixed slope estimates
fixef(mod1)

# Random effects variance-covariance matrix
as.data.frame(VarCorr(mod1))

# R-squared measures according to Rights and Sterba (2019)
multilevel.r2.manual(data = Demo.twolevel,
                     within = "x2.c", between = c("x2.b", "w1"),
                     gamma.w = 0.41127956,
                     gamma.b = c(0.01123245, -0.08269374, 0.17688507),
                     tau = 0.9297401, sigma2 = 1.813245794)

#-------------------------------------------------------------------------------

# Example 2: Random intercept and slope model

# Fixed slope estimates
fixef(mod2)

# Random effects variance-covariance matrix
as.data.frame(VarCorr(mod2))

# R-squared measures according to Rights and Sterba (2019)
multilevel.r2.manual(data = Demo.twolevel,
                     within = "x2.c", between = c("x2.b", "w1"), random = "x2.c",
                     gamma.w = 0.41127956,
                     gamma.b = c(0.01123245, -0.08269374, 0.17688507),
                     tau = matrix(c(0.931008649, 0.004110479,
                                    0.004110479, 0.017068857), ncol = 2),
                     sigma2 = 1.813245794)

## End(Not run)
This function computes (1) a Pearson product-moment correlation matrix to identify variables related to the incomplete variable (i.e., correlates of incomplete variables), (2) a Cohen's d matrix comparing cases with and without missing values to identify variables related to the probability of missingness (i.e., correlates of missingness), and (3) semi-partial correlations of an outcome variable conditional on the predictor variables of a substantive model with a set of candidate auxiliary variables to identify correlates of an incomplete outcome variable, as suggested by Raykov and West (2016).
na.auxiliary(..., data = NULL, model = NULL, estimator = c("ML", "MLR"),
             missing = c("fiml", "two.stage", "robust.two.stage", "doubly.robust"),
             tri = c("both", "lower", "upper"), weighted = FALSE, correct = FALSE,
             digits = 2, p.digits = 3, as.na = NULL, write = NULL, append = TRUE,
             check = TRUE, output = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as |
data |
a data frame when specifying one or more variables in the
argument |
model |
a character string specifying the substantive model predicting
a continuous outcome variable using a set of predictor variables
to estimate semi-partial correlations between the outcome
variable and a set of candidate auxiliary variables. The default
setting is |
estimator |
a character string indicating the estimator to be used
when estimating semi-partial correlation coefficients, i.e.,
|
missing |
a character string indicating how to deal with missing data
when estimating semi-partial correlation coefficients,
i.e., |
tri |
a character string indicating which triangular of the correlation
matrix to show on the console, i.e., |
weighted |
logical: if |
correct |
logical: if |
digits |
an integer value indicating the number of decimal places to be used for displaying correlation coefficients and Cohen's d estimates. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Note that non-numeric variables (i.e., factors, character vectors, and logical vectors) are excluded from the analysis.
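To make the logic concrete, the following minimal sketch (not the function's actual implementation) computes one entry of the correlates-of-missingness matrix by hand: a Cohen's d comparing Solar.R between cases with and without missing values on Ozone, based on the unweighted pooled standard deviation.

# Minimal sketch: correlate of missingness for 'Ozone' by hand
r.na <- is.na(airquality$Ozone)    # TRUE = Ozone missing

# Group means and standard deviations of Solar.R
m <- tapply(airquality$Solar.R, r.na, mean, na.rm = TRUE)
s <- tapply(airquality$Solar.R, r.na, sd, na.rm = TRUE)

# Cohen's d based on the unweighted pooled standard deviation
d <- (m["TRUE"] - m["FALSE"]) / sqrt((s["TRUE"]^2 + s["FALSE"]^2) / 2)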
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
model |
lavaan model syntax for estimating the semi-partial correlations |
model.fit |
fitted lavaan model for estimating the semi-partial correlations |
args |
specification of function arguments |
result |
list with result tables |
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
Raykov, T., & West, B. T. (2016). On enhancing plausibility of the missing at random assumption in incomplete data analyses via evaluation of response-auxiliary variable correlations. Structural Equation Modeling, 23(1), 45–53. https://doi.org/10.1080/10705511.2014.937848
van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). Chapman & Hall.
as.na, na.as, na.coverage, na.descript, na.indicator, na.pattern, na.prop, na.test
# Example 1a: Auxiliary variables
na.auxiliary(airquality)

# Example 1b: Alternative specification using the 'data' argument
na.auxiliary(., data = airquality)

# Example 2a: Semi-partial correlation coefficients
na.auxiliary(airquality, model = "Ozone ~ Solar.R + Wind")

# Example 2b: Alternative specification using the 'data' argument
na.auxiliary(Temp, Month, Day, data = airquality,
             model = "Ozone ~ Solar.R + Wind")

## Not run: 
# Example 3: Write Results into a text file
na.auxiliary(airquality, write = "NA_Auxiliary.txt")

## End(Not run)
This function computes the proportion of cases that contribute to the calculation of each variance and covariance.
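The underlying computation can be sketched in base R (not the function's actual implementation): for each pair of variables, coverage is the proportion of cases observed on both variables.

# Minimal sketch: variance-covariance coverage by hand
obs <- !is.na(airquality)                      # indicator matrix of observed values
coverage <- crossprod(obs) / nrow(airquality)  # pairwise proportion jointly observed
round(coverage, digits = 2)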
na.coverage(..., data = NULL, tri = c("both", "lower", "upper"), digits = 2,
            as.na = NULL, write = NULL, append = TRUE, check = TRUE,
            output = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as |
data |
a data frame when specifying one or more variables in the
argument |
tri |
a character string or character vector indicating which triangular
of the matrix to show on the console, i.e., |
digits |
an integer value indicating the number of decimal places to be used for displaying proportions. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). Chapman & Hall.
write.result, as.na, na.as, na.auxiliary, na.descript, na.indicator, na.pattern, na.prop, na.test
# Example 1a: Compute variance-covariance coverage
na.coverage(airquality)

# Example 1b: Alternative specification using the 'data' argument
na.coverage(., data = airquality)

## Not run: 
# Example 2a: Write Results into a text file
na.coverage(airquality, write = "Coverage.txt")

# Example 2b: Write Results into an Excel file
na.coverage(airquality, write = "Coverage.xlsx")

result <- na.coverage(airquality, output = FALSE)
write.result(result, "Coverage.xlsx")

## End(Not run)
This function computes descriptive statistics for missing data in single-level, two-level, and three-level data, e.g., the number of incomplete cases, the number of missing values, and summary statistics for the number of missing values across all variables.
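The basic quantities reported by the function can be sketched in base R (not the function's actual implementation):

# Minimal sketch: basic missing data descriptives by hand
colSums(is.na(airquality))          # number of missing values per variable
sum(!complete.cases(airquality))    # number of incomplete cases
mean(rowSums(is.na(airquality)))    # average number of missing values per case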
na.descript(..., data = NULL, cluster = NULL, table = FALSE, digits = 2,
            as.na = NULL, write = NULL, append = TRUE, check = TRUE,
            output = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
a character string indicating the name of the cluster
variable in |
table |
logical: if |
digits |
an integer value indicating the number of decimal places to be used for displaying percentages. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
data frame used for the current analysis |
args |
specification of function arguments |
result |
list with results |
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). Chapman & Hall.
write.result, as.na, na.as, na.auxiliary, na.coverage, na.indicator, na.pattern, na.prop, na.test
#----------------------------------------------------------------------------
# Single-Level Data

# Example 1a: Descriptive statistics for missing data
na.descript(airquality)

# Example 1b: Alternative specification using the 'data' argument
na.descript(., data = airquality)

# Example 2: Descriptive statistics for missing data, print results with 3 digits
na.descript(airquality, digits = 3)

# Example 3: Descriptive statistics for missing data with frequency table
na.descript(airquality, table = TRUE)

#----------------------------------------------------------------------------
# Two-Level Data

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

# Example 4: Descriptive statistics for missing data
na.descript(Demo.twolevel, cluster = "cluster")

#----------------------------------------------------------------------------
# Three-Level Data

# Create arbitrary three-level data
Demo.threelevel <- data.frame(Demo.twolevel, cluster2 = Demo.twolevel$cluster,
                              cluster3 = rep(1:10, each = 250))

# Example 5: Descriptive statistics for missing data
na.descript(Demo.threelevel, cluster = c("cluster3", "cluster2"))

#----------------------------------------------------------------------------
# Write Results

## Not run: 
# Example 6a: Write Results into a text file
na.descript(airquality, table = TRUE, write = "NA_Descriptives.txt")

# Example 6b: Write Results into an Excel file
na.descript(airquality, table = TRUE, write = "NA_Descriptives.xlsx")

result <- na.descript(airquality, table = TRUE, output = FALSE)
write.result(result, "NA_Descriptives.xlsx")

## End(Not run)
This function creates a missing data indicator matrix that denotes whether
values are observed or missing, i.e., 1 if a value is observed, and 0 if a
value is missing.
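For illustration, the default indicator matrix can be reproduced in base R (a minimal sketch, not the function's actual implementation); the ".i" suffix mirrors the default of the name argument.

# Minimal sketch: missing data indicator matrix by hand
ind <- 1 * !is.na(airquality)                        # 1 = observed, 0 = missing
colnames(ind) <- paste0(colnames(airquality), ".i")  # name suffix '.i'
head(ind)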
na.indicator(..., data = NULL, na = 0, append = TRUE, name = ".i",
             as.na = NULL, check = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as |
data |
a data frame when specifying one or more variables in the
argument |
na |
an integer value specifying the value representing missing values,
i.e., either |
append |
logical: if |
name |
a character string indicating the name suffix of indicator variables
By default, the indicator variables are named with the ending
|
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
Returns a matrix or data frame with 1 if a value is observed, and 0 if a value
is missing.
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). Chapman & Hall.
as.na, na.as, na.auxiliary, na.coverage, na.descript, na.pattern, na.prop, na.test
# Example 1a: Create missing data indicator matrix
na.indicator(airquality)

# Example 1b: Alternative specification using the 'data' argument
na.indicator(., data = airquality)

# Example 2: Append missing data indicator matrix to the data frame
na.indicator(., data = airquality)
This function computes a summary of missing data patterns, i.e., the number (%) of cases with a specific missing data pattern, and plots the missing data patterns.
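For illustration, missing data patterns can be tabulated in base R (a minimal sketch, not the function's actual implementation) by collapsing the row-wise missingness indicators into pattern strings:

# Minimal sketch: missing data patterns by hand ("1" = missing)
pattern <- apply(is.na(airquality), 1,
                 function(r) paste(as.integer(r), collapse = ""))

# Number of cases per missing data pattern
sort(table(pattern), decreasing = TRUE)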
na.pattern(..., data = NULL, order = FALSE, n.pattern = NULL, plot = FALSE,
           square = TRUE, rotate = FALSE, fill.col = c("#B61A51B3", "#006CC2B3"),
           alpha = 0.6, plot.margin = c(4, 16, 0, 4),
           legend.box.margin = c(-8, 6, 6, 6), legend.key.size = 12,
           legend.text.size = 9, saveplot = FALSE, file = "NA_Pattern.pdf",
           width = NA, height = NA, units = c("in", "cm", "mm", "px"), dpi = 600,
           digits = 2, as.na = NULL, write = NULL, append = TRUE, check = TRUE,
           output = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as |
data |
a data frame when specifying one or more variables in the
argument |
order |
logical: if |
n.pattern |
an integer value indicating the minimum number of cases sharing
a missing data pattern to be included in the result table and the plot, e.g., specifying
|
plot |
logical: if |
square |
logical: if |
rotate |
logical: if |
fill.col |
a character string indicating the color for the |
alpha |
a numeric value between 0 and 1 for the |
plot.margin |
a numeric vector indicating the |
legend.box.margin |
a numeric vector indicating the |
legend.key.size |
a numeric value indicating the |
legend.text.size |
a numeric value indicating the |
saveplot |
logical: if |
file |
a character string indicating the |
width |
a numeric value indicating the |
height |
a numeric value indicating the |
units |
a character string indicating the |
dpi |
a numeric value indicating the |
digits |
an integer value indicating the number of decimal places to be used for displaying percentages. |
as.na |
a numeric vector indicating user-defined missing values, i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
list with data frames, i.e., |
args |
specification of function arguments |
result |
result table |
plot |
ggplot2 object for plotting the results |
pattern |
a numeric vector indicating the missing data pattern for each case |
The code for plotting missing data patterns is based on the plot_pattern
function in the ggmice package by Hanne Oberman.
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
Oberman, H. (2023). ggmice: Visualizations for 'mice' with 'ggplot2'. R package version 0.1.0. https://doi.org/10.32614/CRAN.package.ggmice
van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). Chapman & Hall.
write.result, as.na, na.as, na.auxiliary, na.coverage, na.descript, na.indicator, na.prop, na.test
## Not run: 
# Example 1a: Compute a summary of missing data patterns
dat.pattern <- na.pattern(airquality)

# Example 1b: Alternative specification using the 'data' argument
dat.pattern <- na.pattern(., data = airquality)

# Example 2a: Compute and plot a summary of missing data patterns
na.pattern(airquality, plot = TRUE)

# Example 2b: Plot missing data patterns with at least 3 cases
na.pattern(airquality, plot = TRUE, n.pattern = 3)

# Example 3: Vector of missing data pattern for each case
dat.pattern$pattern

# Data frame without cases with missing data pattern 2 and 4
airquality[!dat.pattern$pattern %in% c(2, 4), ]

# Example 4a: Write Results into a text file
result <- na.pattern(airquality, write = "NA_Pattern.txt")

# Example 4b: Write Results into an Excel file
result <- na.pattern(airquality, write = "NA_Pattern.xlsx")

# Example 4c: Assign results into an object and write results into an Excel file
result <- na.pattern(airquality, output = FALSE)
write.result(result, "NA_Pattern.xlsx")

## End(Not run)
This function computes the proportion of missing data for each case in a matrix or data frame.
na.prop(..., data = NULL, digits = 2, append = TRUE, name = "na.prop",
        as.na = NULL, check = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as |
data |
a data frame when specifying one or more variables in the
argument |
name |
a character string indicating the name of the variable appended
to the data frame specified in the argument |
append |
logical: if |
digits |
an integer value indicating the number of decimal places to be used for displaying proportions. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
Returns a numeric vector with the same length as the number of rows in x
containing the proportion of missing data.
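Equivalently, a minimal base R sketch (not the function's actual implementation):

# Proportion of missing values per case as row-wise mean of the indicator matrix
prop.na <- rowMeans(is.na(airquality))
head(prop.na)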
Takuya Yanagida [email protected]
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Graham, J. W. (2009). Missing data analysis: Making it work in the real world. Annual Review of Psychology, 60, 549-576. https://doi.org/10.1146/annurev.psych.58.110405.085530
van Buuren, S. (2018). Flexible imputation of missing data (2nd ed.). Chapman & Hall.
as.na, na.as, na.auxiliary, na.coverage, na.descript, na.indicator, na.pattern, na.test
# Example 1a: Compute proportion of missing data for each case in the data frame
na.prop(airquality)

# Example 1b: Alternative specification using the 'data' argument,
# append proportions to the data frame 'airquality'
na.prop(., data = airquality)
This function estimates a confirmatory factor analysis model (cfa.satcor
function), structural equation model (sem.satcor function), growth curve
model (growth.satcor function), or latent variable model (lavaan.satcor
function) in the R package lavaan using the full information maximum
likelihood (FIML) method to handle missing data, while automatically
specifying a saturated correlates model to incorporate auxiliary variables
into a substantive model without affecting the parameter estimates, the
standard errors, or the estimates of quality of fit (Graham, 2003).
na.satcor(model, data, aux, fun = c("cfa", "sem", "growth", "lavaan"),
          check = TRUE, ...)

cfa.satcor(model, data, aux, check = TRUE, ...)

sem.satcor(model, data, aux, check = TRUE, ...)

growth.satcor(model, data, aux, check = TRUE, ...)

lavaan.satcor(model, data, aux, check = TRUE, ...)
model |
a character string indicating the lavaan model syntax without the
auxiliary variables specified in |
data |
a data frame containing the observed variables used in the lavaan
model syntax specified in |
aux |
a character vector indicating the names of the auxiliary variables
in the data frame specified in |
fun |
a character string indicating the name of a specific lavaan function
used to fit |
check |
logical: if |
... |
additional arguments passed to the lavaan function. |
An object of class lavaan, for which several methods are available in the R package lavaan, including a summary method.
This function is a modified copy of the auxiliary()
, cfa.auxiliary()
,
sem.auxiliary()
, growth.auxiliary()
, and lavaan.auxiliary()
functions in the semTools package by Terrence D. Jorgensen et al.
(2022).
Takuya Yanagida
Graham, J. W. (2003). Adding missing-data-relevant variables to FIML-based structural equation models. Structural Equation Modeling, 10(1), 80-100. https://doi.org/10.1207/S15328007SEM1001_4
Jorgensen, T. D., Pornprasertmanit, S., Schoemann, A. M., & Rosseel, Y. (2022). semTools: Useful tools for structural equation modeling. R package version 0.5-6. Retrieved from https://CRAN.R-project.org/package=semTools
## Not run:
# Load lavaan package
library(lavaan)

#----------------------------------------------------------------------------
# Example 1: Saturated correlates model for the sem function

# Model specification
model <- 'Ozone ~ Wind'

# Model estimation using the sem.satcor function
mod.fit <- sem.satcor(model, data = airquality, aux = c("Temp", "Month"))

# Model estimation using the na.satcor function
mod.fit <- na.satcor(model, data = airquality, fun = "sem",
                     aux = c("Temp", "Month"), estimator = "MLR")

# Result summary
summary(mod.fit)
## End(Not run)
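Conceptually, the saturated correlates specification (Graham, 2003) leaves the
substantive model unchanged and correlates the auxiliary variables with each
other, with the exogenous predictors, and with the residuals of the endogenous
variables. A rough sketch of this idea in plain lavaan syntax for Example 1
(not the exact syntax the function generates):

model.satcor <- '
# Substantive model (unchanged)
Ozone ~ Wind

# Auxiliary variables correlated with each other
Temp ~~ Month

# Auxiliary variables correlated with the predictor and the outcome residual
Temp ~~ Wind + Ozone
Month ~~ Wind + Ozone
'

mod.fit <- lavaan::sem(model.satcor, data = airquality, missing = "fiml",
                       fixed.x = FALSE)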
This function performs Little's Missing Completely at Random (MCAR) test and Jamshidian and Jalal's approach for testing the MCAR assumption. By default, the function performs the Little's MCAR test.
na.test(..., data = NULL, print = c("all", "little", "jamjal"),
        impdat = NULL, delete = 6, method = c("npar", "normal"),
        m = 20, seed = 123, nrep = 10000, n.min = 30,
        pool = c("m", "med", "min", "max", "random"), alpha = 0.05,
        digits = 2, p.digits = 3, as.na = NULL, write = NULL,
        append = TRUE, check = TRUE, output = TRUE)
... |
a matrix or data frame with incomplete data, where missing
values are coded as NA. |
data |
a data frame when specifying one or more variables in the
argument |
print |
a character vector indicating which results to be printed on
the console, i.e. |
impdat |
an object of class |
delete |
an integer value indicating missing data patterns consisting
of |
method |
a character string indicating the imputation method, i.e.,
|
m |
an integer value indicating the number of multiple imputations.
The default setting is |
seed |
an integer value that is used as argument by the set.seed() function. |
nrep |
an integer value indicating the replications used to simulate
the Neyman distribution to determine the cut off value for the
Neyman test. Larger values increase the accuracy of the Neyman
test. The default setting is |
n.min |
an integer value indicating the minimum number of cases in a group that triggers the use of asymptotic Chi-square distribution in place of the empirical distribution in the Neyman test of uniformity. |
pool |
a character string indicating the pooling method, i.e.,
|
alpha |
a numeric value between 0 and 1 indicating the significance
level of the Hawkins test. The default setting is |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values, i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
Little (1988) proposed a multivariate test of Missing Completely at Random
(MCAR) that tests for mean differences on every variable in the data set
across subgroups that share the same missing data pattern by comparing the
observed variable means for each pattern of missing data with the expected
population means estimated using the expectation-maximization (EM) algorithm
(i.e., EM maximum likelihood estimates). The test statistic is the sum of
the squared standardized differences between the subsample means and the
expected population means weighted by the estimated variance-covariance
matrix and the number of observations within each subgroup (Enders, 2010).
Under the null hypothesis that data are MCAR, the test statistic follows
asymptotically a chi-square distribution with sum(k_j) - k degrees of
freedom, where k_j is the number of complete variables for missing data
pattern j, and k is the total number of variables. A statistically
significant result provides evidence against MCAR.
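For illustration, the degrees of freedom can be tallied directly in base R
(a sketch; the na.pattern() function in this package provides the same
pattern summary):

# Unique missing data patterns in airquality (TRUE = observed)
pat <- unique(!is.na(airquality))
k.j <- rowSums(pat)       # number of complete variables per pattern
k <- ncol(airquality)     # total number of variables
sum(k.j) - k              # degrees of freedom of Little's MCAR test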
Note that Little's MCAR test has a number of problems (see Enders, 2010).
First, the test does not identify the specific variables that violate MCAR, i.e., it does not identify potential correlates of missingness (i.e., auxiliary variables).
Second, the test assumes multivariate normality, i.e., under departures from the normality assumption the test might be unreliable unless the sample size is large, and it is not suitable for categorical variables.
Third, the test investigates mean differences assuming that the missing data patterns share a common covariance matrix, i.e., the test cannot detect covariance-based deviations from MCAR stemming from a Missing at Random (MAR) or Missing Not at Random (MNAR) mechanism because MAR and MNAR mechanisms can also produce missing data subgroups with equal means.
Fourth, simulation studies suggest that Little's MCAR test suffers from low statistical power, particularly when the number of variables that violate MCAR is small, the relationship between the data and missingness is weak, or the data are MNAR (Thoemmes & Enders, 2007).
Fifth, the test can only reject, but never prove, the MCAR assumption, i.e., a statistically nonsignificant result (failing to reject the null hypothesis) does not prove that the data are MCAR.
Sixth, a statistically nonsignificant result is consistent with data that are MCAR or MNAR, while a statistically significant result indicates that missing data are MAR or MNAR, i.e., MNAR cannot be ruled out regardless of the result of the test.
The function for performing Little's MCAR test is based on the mlest
function from the mvnmle package which can handle up to 50 variables.
Note that the mcar_test
function in the naniar package is based
on the prelim.norm
function from the norm package. This function
can handle about 30 variables, but with more than 30 variables specified in
the argument data
, the prelim.norm
function might run into
numerical problems leading to results that are not trustworthy (i.e.,
p.value = 1
). In that case, the warning message
In norm::prelim.norm(data) : NAs introduced by coercion to integer range
is printed on the console.
Jamshidian and Jalal (2010) proposed an approach for testing the Missing Completely at Random (MCAR) assumption based on two tests of multivariate normality and homogeneity of covariances among groups of cases with identical missing data patterns:
In the first step, missing data are multiply imputed
(m = 20
times by default) using a non-parametric imputation method
(method = "npar"
by default) by Srivastava and Dolatabadi (2009)
or using a parametric imputation method assuming multivariate normality
of data (method = "normal"
) for each group of cases sharing a common
missing data pattern.
In the second step, a modified Hawkins test for multivariate normality and homogeneity of covariances applicable to complete data consisting of groups with a small number of cases is performed. A statistically not significant result indicates no evidence against multivariate normality of data or homogeneity of covariances, while a statistically significant result provides evidence against multivariate normality of data or homogeneity of covariances (i.e., violation of the MCAR assumption). Note that the Hawkins test is a test of multivariate normality as well as homogeneity of covariance. Hence, a statistically significant test is ambiguous unless the researcher assumes multivariate normality of data.
In the third step, if the Hawkins test is statistically significant, the Anderson-Darling non-parametric test is performed. A statistically not significant result indicates evidence against multivariate normality of data but no evidence against homogeneity of covariances, while a statistically significant result provides evidence against homogeneity of covariances (i.e., violation of the MCAR assumption). However, no conclusions can be made about the multivariate normality of data when the Anderson-Darling non-parametric test is statistically significant.
In summary, a statistically significant result of both the Hawkins and the
Anderson-Darling non-parametric test provides evidence against the MCAR assumption.
The test statistic and the significance values of the Hawkins test and the
Anderson-Darling non-parametric based on multiply imputed data sets are pooled
by computing the median test statistic and significance value (pool = "med"
by default) as suggested by Eekhout, Wiel, and Heymans (2017).
Note that out of the problems listed for Little's MCAR test, the first, second (i.e., the approach is not suitable for categorical variables), fifth, and sixth problems also apply to Jamshidian and Jalal's approach for testing the MCAR assumption.
In practice, rejecting or not rejecting the MCAR assumption may not be relevant as modern missing data handling methods like full information maximum likelihood (FIML) estimation, Bayesian estimation, or multiple imputation are asymptotically valid under the missing at random (MAR) assumption (Jamshidian & Yuan, 2014). It is more important to distinguish MAR from missing not at random (MNAR), but MAR and MNAR mechanisms cannot be distinguished without auxiliary information.
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
result |
list with result tables, i.e., |
The code for Little's MCAR test is a modified copy of the LittleMCAR
function in the BaylorEdPsych package by A. Alexander Beaujean. The code
for Jamshidian and Jalal's approach is a modified copy of the TestMCARNormality
function in the MissMech package by Mortaza Jamshidian, Siavash Jalal,
Camden Jansen, and Mao Kobayashi (2024).
Takuya Yanagida [email protected]
Beaujean, A. A. (2012). BaylorEdPsych: R Package for Baylor University Educational Psychology Quantitative Courses. R package version 0.5. http://cran.nexr.com/web/packages/BaylorEdPsych/index.html
Eekhout, I., M. A. Wiel, & M. W. Heymans (2017). Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: Power and applicability analysis. BMC Medical Research Methodology, 17:129. https://doi.org/10.1186/s12874-017-0404-7
Enders, C. K. (2010). Applied missing data analysis. Guilford Press.
Little, R. J. A. (1988). A test of Missing Completely at Random for multivariate data with missing values. Journal of the American Statistical Association, 83, 1198-1202. https://doi.org/10.2307/2290157
Jamshidian, M., & Jalal, S. (2010). Tests of homoscedasticity, normality, and missing completely at random for incomplete multivariate data. Psychometrika, 75(4), 649-674. https://doi.org/10.1007/s11336-010-9175-3
Jamshidian, M., & Yuan, K.H. (2014). Examining missing data mechanisms via homogeneity of parameters, homogeneity of distributions, and multivariate normality. WIREs Computational Statistics, 6(1), 56-73. https://doi.org/10.1002/wics.1287
Jamshidian, M., Jalal, S., Jansen, C., & Kobayashi, M. (2024). MissMech: Testing Homoscedasticity, Multivariate Normality, and Missing Completely at Random. R package version 1.0.4. https://doi.org/10.32614/CRAN.package.MissMech
Srivastava, M.S., & Dolatabadi, M. (2009). Multiple imputation and other resampling scheme for imputing missing observations. Journal of Multivariate Analysis, 100, 1919-1937. https://doi.org/10.1016/j.jmva.2009.06.003
Thoemmes, F., & Enders, C. K. (2007, April). A structural equation model for testing whether data are missing completely at random. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
as.na, na.as, na.auxiliary, na.coverage, na.descript, na.indicator, na.pattern, na.prop.
# Example 1a: Perform Little's MCAR test and Jamshidian and Jalal's approach
na.test(airquality)

# Example 1b: Alternative specification using the 'data' argument
na.test(., data = airquality)

# Example 2: Perform Jamshidian and Jalal's approach
na.test(airquality, print = "jamjal")

## Not run:
# Example 3: Write results into a text file
na.test(airquality, write = "NA_Test.txt")
## End(Not run)
This function prints the misty.object object.
## S3 method for class 'misty.object'
print(x, print = x$args$print, tri = x$args$tri, freq = x$args$freq,
      hypo = x$args$hypo, descript = x$args$descript, epsilon = x$args$epsilon,
      effsize = x$args$effsize, posthoc = x$args$posthoc, split = x$args$split,
      table = x$args$table, digits = x$args$digits, p.digits = x$args$p.digits,
      icc.digits = x$args$icc.digits, r.digits = x$args$r.digits,
      ess.digits = x$args$ess.digits, mcse.digits = x$args$mcse.digits,
      sort.var = x$args$sort.var, order = x$args$order, check = TRUE, ...)
x |
an object of class misty.object. |
print |
a character string or character vector indicating which results to be printed on the console. |
tri |
a character string or character vector indicating which
triangular of the matrix to show on the console, i.e.,
|
freq |
logical: if |
hypo |
logical: if |
descript |
logical: if |
epsilon |
logical: if |
effsize |
logical: if |
posthoc |
logical: if |
split |
logical: if |
table |
logical: if |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer indicating the number of decimal places to be used for displaying p-values. |
icc.digits |
an integer indicating the number of decimal places to be used
for displaying intraclass correlation coefficients
( |
r.digits |
an integer value indicating the number of decimal places to be used for displaying R-hat values. |
ess.digits |
an integer value indicating the number of decimal places to be used for displaying effective sample sizes. |
mcse.digits |
an integer value indicating the number of decimal places to be used for displaying Monte Carlo standard errors. |
sort.var |
logical: if |
order |
logical: if |
check |
logical: if |
... |
further arguments passed to or from other methods. |
Takuya Yanagida [email protected]
This function reads a (1) data file in CSV (.csv
), DAT (.dat
),
or TXT (.txt
) format using the fread
function from the data.table
package, (2) SPSS file (.sav
) using the read.sav
function, (3)
Excel file (.xlsx
) using the read.xlsx
function, or a (4) Stata
DTA file (.dta
) using the read.dta
function in the misty
package.
read.data(file, sheet = NULL, header = TRUE, select = NULL, drop = NULL,
          use.value.labels = FALSE, use.missings = TRUE, na.strings = "NA",
          stringsAsFactors = FALSE, formats = FALSE, label = FALSE,
          labels = FALSE, missing = FALSE, widths = FALSE,
          as.data.frame = TRUE,
          encoding = c("unknown", "UTF-8", "Latin-1"), check = TRUE)
file |
a character string indicating the name of the data file
with the file extension |
sheet |
a character string indicating the name of an Excel sheet
or a numeric value indicating the position of the Excel
sheet to read. By default, the first sheet will be read
when reading an Excel file (.xlsx). |
header |
logical: if |
select |
a character vector of column names or numeric vector to
keep, drop the rest. See the help page of the
fread function in the data.table package. |
drop |
a character vector of column names or numeric vector to drop, keep the rest. |
use.value.labels |
logical: if |
use.missings |
logical: if |
na.strings |
a character vector of strings which are to be interpreted as NA values. |
stringsAsFactors |
logical: if |
formats |
logical: if |
label |
logical: if |
labels |
logical: if |
missing |
logical: if |
widths |
logical: if |
as.data.frame |
logical: if |
encoding |
a character string indicating the encoding, i.e.,
|
check |
logical: if |
Returns a data frame, tibble, or data table.
Takuya Yanagida
Barrett, T., Dowle, M., Srinivasan, A., Gorecki, J., Chirico, M., Hocking, T., & Schwendinger, B. (2024). data.table: Extension of 'data.frame'. R package version 1.16.0. https://CRAN.R-project.org/package=data.table
Wickham H, Miller E, Smith D (2023). haven: Import and Export 'SPSS', 'Stata' and 'SAS' Files. R package version 2.5.3. https://CRAN.R-project.org/package=haven
read.sav, read.xlsx, read.dta, read.mplus
## Not run:
# Read CSV data file
dat <- read.data("CSV_Data.csv")

# Read DAT data file
dat <- read.data("DAT_Data.dat")

# Read TXT data file
dat <- read.data("TXT_Data.txt")

# Read SPSS data file
dat <- read.data("SPSS_Data.sav")

# Read Excel data file
dat <- read.data("Excel_Data.xlsx")

# Read Stata data file
dat <- read.data("Stata_Data.dta")
## End(Not run)
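The reader is chosen by file extension; a minimal sketch of the dispatch idea
(read.any() is a hypothetical helper, not part of the package):

# Sketch: pick a reader based on the file extension
read.any <- function(file) {
  switch(tools::file_ext(file),
         csv = ,
         dat = ,
         txt = data.table::fread(file, data.table = FALSE),
         sav = haven::read_sav(file),
         xlsx = readxl::read_xlsx(file),
         dta = haven::read_dta(file),
         stop("Unsupported file extension"))
}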
This function calls the read_dta
function in the haven package
by Hadley Wickham, Evan Miller and Danny Smith (2023) to read a Stata DTA file.
read.dta(file, use.value.labels = FALSE, formats = FALSE, label = FALSE,
         labels = FALSE, missing = FALSE, widths = FALSE,
         as.data.frame = TRUE, check = TRUE)
file |
a character string indicating the name of the Stata
data file with or without file extension '.dta', e.g.,
|
use.value.labels |
logical: if |
formats |
logical: if |
label |
logical: if |
labels |
logical: if |
missing |
logical: if |
widths |
logical: if |
as.data.frame |
logical: if |
check |
logical: if |
Returns a data frame or tibble.
This function is a modified copy of the read_dta()
function in the
haven package by Hadley Wickham, Evan Miller and Danny Smith (2023).
Hadley Wickham and Evan Miller
Wickham H, Miller E, Smith D (2023). haven: Import and Export 'SPSS', 'Stata' and 'SAS' Files. R package version 2.5.3. https://CRAN.R-project.org/package=haven
read.sav, write.sav, read.xlsx, write.xlsx, read.mplus, write.mplus
## Not run: read.dta("Stata_Data.dta") read.dta("Stata_Data") # Example 2: Read Stata data, convert variables with value labels into factors read.dta("Stata_Data.dta", use.value.labels = TRUE) # Example 3: Read Stata data as tibble read.dta("Stata_Data.dta", as.data.frame = FALSE) ## End(Not run)
## Not run: read.dta("Stata_Data.dta") read.dta("Stata_Data") # Example 2: Read Stata data, convert variables with value labels into factors read.dta("Stata_Data.dta", use.value.labels = TRUE) # Example 3: Read Stata data as tibble read.dta("Stata_Data.dta", as.data.frame = FALSE) ## End(Not run)
This function reads an Mplus data file and/or Mplus input/output file to return
a data frame with variable names extracted from the Mplus input/output file.
Note that by default -99 in the Mplus data file is replaced with NA.
read.mplus(file, sep = "", input = NULL, na = -99, print = FALSE,
           return.var = FALSE, encoding = "UTF-8-BOM", check = TRUE)
file |
a character string indicating the name of the Mplus data
file with or without the file extension |
sep |
a character string indicating the field separator (i.e.,
delimiter) used in the data file specified in |
input |
a character string indicating the Mplus input (.inp) or output (.out) file. |
na |
a numeric vector indicating values to replace with NA. |
print |
logical: if |
return.var |
logical: if |
encoding |
character string declaring the encoding used on |
check |
logical: if |
A data frame containing a representation of the data in the file.
Takuya Yanagida [email protected]
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.dta, write.dta, read.sav, write.sav, read.xlsx, write.xlsx
## Not run:
# Example 1: Read Mplus data file and variable names extracted from the Mplus input file
dat <- read.mplus("Mplus_Data.dat", input = "Mplus_Input.inp")

# Example 2: Read Mplus data file and variable names extracted from the Mplus input file,
# print variable names on the console
dat <- read.mplus("Mplus_Data.dat", input = "Mplus_Input.inp", print = TRUE)

# Example 3: Read variable names extracted from the Mplus input file
varnames <- read.mplus(input = "Mplus_Input.inp", return.var = TRUE)
## End(Not run)
This function calls the read_spss
function in the haven package
by Hadley Wickham, Evan Miller and Danny Smith (2023) to read an SPSS file.
read.sav(file, use.value.labels = FALSE, use.missings = TRUE, formats = FALSE,
         label = FALSE, labels = FALSE, missing = FALSE, widths = FALSE,
         as.data.frame = TRUE, check = TRUE)
file |
a character string indicating the name of the SPSS data file
with or without file extension '.sav', e.g., |
use.value.labels |
logical: if |
use.missings |
logical: if |
formats |
logical: if |
label |
logical: if |
labels |
logical: if |
missing |
logical: if |
widths |
logical: if |
as.data.frame |
logical: if |
check |
logical: if |
Returns a data frame or tibble.
Hadley Wickham, Evan Miller and Danny Smith
Wickham H, Miller E, & Smith D (2023). haven: Import and Export 'SPSS', 'Stata' and 'SAS' Files. R package version 2.5.3. https://CRAN.R-project.org/package=haven
read.dta, write.dta, read.xlsx, write.xlsx, read.mplus, write.mplus
## Not run:
# Example 1: Read SPSS data file
read.sav("SPSS_Data.sav")
read.sav("SPSS_Data")

# Example 2: Read SPSS data file, convert variables with value labels into factors
read.sav("SPSS_Data.sav", use.value.labels = TRUE)

# Example 3: Read SPSS data file, user-defined missing values are not converted into NAs
read.sav("SPSS_Data.sav", use.missings = FALSE)

# Example 4: Read SPSS data file as tibble
read.sav("SPSS_Data.sav", as.data.frame = FALSE)
## End(Not run)
This function calls the read_xlsx() function in the readxl package
by Hadley Wickham and Jennifer Bryan (2023) to read an Excel file (.xlsx).
read.xlsx(file, sheet = NULL, header = TRUE, range = NULL,
          coltypes = c("skip", "guess", "logical", "numeric", "date",
                       "text", "list"),
          na = "", trim = TRUE, skip = 0, nmax = Inf,
          guessmax = min(1000, nmax), progress = readxl::readxl_progress(),
          name.repair = "unique", as.data.frame = TRUE, check = TRUE)
file |
a character string indicating the name of the Excel data
file with or without file extension '.xlsx', e.g., |
sheet |
a character string indicating the name of a sheet or a numeric value indicating the position of the sheet to read. By default the first sheet will be read. |
header |
logical: if |
range |
a character string indicating the cell range to read from,
e.g. typical Excel ranges like |
coltypes |
a character vector containing one entry per column from
these options |
na |
a character vector indicating strings to interpret as missing values. By default, blank cells will be treated as missing data. |
trim |
logical: if |
skip |
a numeric value indicating the minimum number of rows to
skip before reading anything, be it column names or data.
Leading empty rows are automatically skipped, so this is
a lower bound. Ignored if the argument |
nmax |
a numeric value indicating the maximum number of data rows
to read. Trailing empty rows are automatically skipped, so
this is an upper bound on the number of rows in the returned
data frame. Ignored if the argument |
guessmax |
a numeric value indicating the maximum number of data rows to use for guessing column types. |
progress |
display a progress spinner? By default, the spinner appears only in an interactive session, outside the context of knitting a document, and when the call is likely to run for several seconds or more. |
name.repair |
a character string indicating the handling of column names. By default, the function ensures column names are not empty and are unique. |
as.data.frame |
logical: if |
check |
logical: if |
Returns a data frame or tibble.
Hadley Wickham and Jennifer Bryan
Wickham H, Bryan J (2023). readxl: Read Excel Files. R package version 1.4.3. https://CRAN.R-project.org/package=readxl
read.dta, write.dta, read.sav, write.sav, read.mplus, write.mplus
## Not run:
# Example 1: Read Excel file (.xlsx)
read.xlsx("data.xlsx")

# Example 2: Read Excel file (.xlsx), use default names as column names
read.xlsx("data.xlsx", header = FALSE)

# Example 3: Read Excel file (.xlsx), interpret -99 as missing values
read.xlsx("data.xlsx", na = "-99")

# Example 4: Read Excel file (.xlsx), use x1, x2, and x3 as column names
read.xlsx("data.xlsx", header = c("x1", "x2", "x3"))

# Example 5: Read Excel file (.xlsx), read cells A1:B5
read.xlsx("data.xlsx", range = "A1:B5")

# Example 6: Read Excel file (.xlsx), skip 2 rows before reading data
read.xlsx("data.xlsx", skip = 2)

# Example 7: Read Excel file (.xlsx), return a tibble
read.xlsx("data.xlsx", as.data.frame = FALSE)
## End(Not run)
This function recodes numeric vectors, character vectors, or factors according to recode specifications.
rec(..., data = NULL, spec, as.factor = FALSE, levels = NULL, append = TRUE,
    name = ".e", as.na = NULL, table = FALSE, check = TRUE)
... |
a numeric vector, character vector, factor, matrix or data
frame. Alternatively, an expression indicating the variable
names in |
data |
a data frame when specifying one or more variables in the
argument |
spec |
a character string of recode specifications (see 'Details'). |
as.factor |
logical: if |
levels |
a character vector for specifying the levels in the returned factor. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to NA before conducting the analysis. |
append |
logical: if |
name |
a character string or character vector indicating the names
of the recoded variables. By default, variables are named with the ending
|
table |
logical: if |
check |
logical: if |
Recode specifications appear in a character string, separated by semicolons
(see the examples below), of the form input = output. If an input value satisfies
more than one specification, then the first (from left to right) applies. If
no specification is satisfied, then the input value is carried over to the
result. NA
is allowed in input and output. Several recode specifications
are supported:
single value: for example, spec = "0 = NA".

vector of values: for example, spec = "c(7, 8, 9) = 'high'".

range of values: for example, spec = "7:9 = 'C'". The special values lo
(lowest value) and hi (highest value) may appear in a range, for example,
spec = "lo:10 = 1". Note that : is not the R sequence operator. In addition,
you may not use : with the collect operator, e.g., spec = "c(1, 3, 5:7)"
will cause an error.

else: for example, spec = "0 = 1; else = NA". Everything that does not fit
a previous specification. Note that else matches all otherwise unspecified
values on input, including NA.
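For instance, the semantics of spec = "lo:10 = 100; 11:hi = 200" for a numeric
vector correspond to the following base R operations (a sketch of the meaning,
not the function's implementation):

x <- c(1, 2, 4, 5, 6, 8, 12, 15, 19, 20)
out <- x
out[x <= 10] <- 100    # lo:10 = 100
out[x >= 11] <- 200    # 11:hi = 200
out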
Returns a numeric vector or data frame with the same length or the same number
of rows as ... containing the recoded variable(s).
This function was adapted from the recode()
function in the car
package by John Fox and Sanford Weisberg (2019).
Takuya Yanagida [email protected]
Fox, J., & Weisberg S. (2019). An R Companion to Applied Regression (3rd ed.). Thousand Oaks CA: Sage. URL: https://socialsciences.mcmaster.ca/jfox/Books/Companion/
#----------------------------------------------------------------------------
# Numeric vector
x.num <- c(1, 2, 4, 5, 6, 8, 12, 15, 19, 20)

# Example 1a: Recode 5 = 50 and 19 = 190
rec(x.num, spec = "5 = 50; 19 = 190")

# Example 1b: Recode 1, 2, and 5 = 100 and 4, 6, and 7 = 200 and else = 300
rec(x.num, spec = "c(1, 2, 5) = 100; c(4, 6, 7) = 200; else = 300")

# Example 1c: Recode lowest value to 10 = 100 and 11 to highest value = 200
rec(x.num, spec = "lo:10 = 100; 11:hi = 200")

# Example 1d: Recode 5 = 50 and 19 = 190 and check recoding
rec(x.num, spec = "5 = 50; 19 = 190", table = TRUE)

#----------------------------------------------------------------------------
# Character vector
x.chr <- c("a", "c", "f", "j", "k")

# Example 2a: Recode a to x
rec(x.chr, spec = "'a' = 'X'")

# Example 2b: Recode a and f to x, c and j to y, and else to z
rec(x.chr, spec = "c('a', 'f') = 'x'; c('c', 'j') = 'y'; else = 'z'")

# Example 2c: Recode a to x and coerce to a factor
rec(x.chr, spec = "'a' = 'X'", as.factor = TRUE)

#----------------------------------------------------------------------------
# Factor
x.fac <- factor(c("a", "b", "a", "c", "d", "d", "b", "b", "a"))

# Example 3a: Recode a to x, factor levels ordered alphabetically
rec(x.fac, spec = "'a' = 'x'")

# Example 3b: Recode a to x, user-defined factor levels
rec(x.fac, spec = "'a' = 'x'", levels = c("x", "b", "c", "d"))

#----------------------------------------------------------------------------
# Multiple variables
dat <- data.frame(x1.num = c(1, 2, 4, 5, 6),
                  x2.num = c(5, 19, 2, 6, 3),
                  x1.chr = c("a", "c", "f", "j", "k"),
                  x2.chr = c("b", "c", "a", "d", "k"),
                  x1.fac = factor(c("a", "b", "a", "c", "d")),
                  x2.fac = factor(c("b", "a", "d", "c", "e")))

# Example 4a: Recode numeric vector and attach to 'dat'
dat <- cbind(dat, rec(dat[, c("x1.num", "x2.num")], spec = "5 = 50; 19 = 190"))

# Example 4b: Alternative specification using the 'data' argument
rec(x1.num, x2.num, data = dat, spec = "5 = 50; 19 = 190")

# Example 4c: Recode character vector and attach to 'dat'
dat <- cbind(dat, rec(dat[, c("x1.chr", "x2.chr")], spec = "'a' = 'X'"))

# Example 4d: Recode factor vector and attach to 'dat'
dat <- cbind(dat, rec(dat[, c("x1.fac", "x2.fac")], spec = "'a' = 'X'"))
This function restarts the RStudio session and is equivalent to using the menu
item Session - Restart R
.
restart()
The function call executeCommand("restartR")
in the package rstudioapi
is used to restart the R session. Note that the function restartSession()
in the package rstudioapi is not equivalent to the menu item
Session - Restart R
since it does not unload packages loaded during an
R session.
Takuya Yanagida [email protected]
Ushey, K., Allaire, J., Wickham, H., & Ritchie, G. (2022). rstudioapi: Safely access the RStudio API. R package version 0.14. https://CRAN.R-project.org/package=rstudioapi
## Not run:
# Example 1: Restart the R Session
restart()
## End(Not run)
This function reads all Mplus output files from latent class analysis in
subfolders to create a summary result table and bar charts for each latent
class solution separately. By default, the function reads output files in all
subfolders of the current working directory. Optionally, bar charts for each
latent class solution can be requested by setting the argument plot
to TRUE
. Note that subfolders with only one Mplus output file are
excluded.
result.lca(folder = getwd(), exclude = NULL, sort.n = TRUE, sort.p = TRUE,
           plot = FALSE, group.ind = TRUE, ci = TRUE, conf.level = 0.95,
           adjust = TRUE, axis.title = 7, axis.text = 7, levels = NULL,
           labels = NULL, ylim = NULL, ylab = "Mean Value",
           breaks = ggplot2::waiver(), error.width = 0.1, legend.title = 7,
           legend.text = 7, legend.key.size = 0.4, gray = FALSE, start = 0.15,
           end = 0.85, dpi = 600, width = "n.ind", height = 4, digits = 1,
           p.digits = 3, write = NULL, append = TRUE, check = TRUE,
           output = TRUE)
folder |
a character string indicating the path of the folder containing
subfolders with the Mplus output files. By default, the current
working directory is used. |
exclude |
a character vector indicating the name of the subfolders excluded from the result tables. |
sort.n |
logical: if |
sort.p |
logical: if |
plot |
logical: if |
group.ind |
logical: if |
ci |
logical: if |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
adjust |
logical: if |
axis.title |
a numeric value specifying the size of the axis title. |
axis.text |
a numeric value specifying the size of the axis text. |
levels |
a character string specifying the order of the indicator variables shown on the x-axis. |
labels |
a character string specifying the labels of the indicator variables shown on the x-axis. |
ylim |
a numeric vector of length two specifying limits of the y-axis. |
ylab |
a character string specifying the label of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
error.width |
a numeric vector specifying the width of the error bars. By default, the width of the error bars is 0.1 plus number of classes divided by 30. |
legend.title |
a numeric value specifying the size of the legend title. |
legend.text |
a numeric value specifying the size of the legend text. |
legend.key.size |
a numeric value specifying the size of the legend keys. |
gray |
logical: if |
start |
a numeric value between 0 and 1 specifying the gray value at the low end of the palette. |
end |
a numeric value between 0 and 1 specifying the gray value at the high end of the palette. |
dpi |
a numeric value specifying the plot resolution when saving the bar chart. |
width |
a numeric value specifying the width of the plot when saving the bar chart. By default, the width is number of indicators plus number of classes divided by 2. |
height |
a numeric value specifying the height of the plot when saving the bar chart. |
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that the scaling
correction factor is displayed with |
p.digits |
an integer value indicating the number of decimal places to be used for displaying p-values, entropy value, and class proportions. |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The result summary table comprises the following entries:
"Folder"
: Subfolder from which the group of Mplus output files
were summarized.
"#Class"
: Number of classes (i.e., CLASSES ARE c(#Class)
).
"Conv"
: Model converged, TRUE
or FALSE
(i.e.,
THE MODEL ESTIMATION TERMINATED NORMALLY
.
"#Param"
: Number of estimated parameters (i.e.,
Number of Free Parameters
).
"logLik"
: Log-likelihood of the estimated model (i.e., H0 Value
).
"Scale"
: Scaling correction factor (i.e.,
H0 Scaling Correction Factor for
). Provided
only when ESTIMATOR IS MLR
.
"LL Rep"
: Best log-likelihood replicated, TRUE
or FALSE
(i.e., THE BEST LOGLIKELIHOOD VALUE HAS BEEN REPLICATED
).
"AIC"
: Akaike information criterion (i.e., Akaike (AIC)
).
"CAIC"
: Consistent AIC, not reported in the Mplus output, but
simply BIC + #Param
.
"BIC"
: Bayesian information criterion (i.e., Bayesian (BIC)
).
"Chi-Pear"
: Pearson chi-square test of model fit (i.e., Pearson Chi-Square
),
only available when indicators are count or ordered categorical.
"Chi-LRT"
: Likelihood ratio chi-square test of model fit (i.e., Likelihood Ratio Chi-Square
),
only available when indicators are count or ordered categorical.
"SABIC"
: Sample-size adjusted BIC (i.e., Sample-Size Adjusted BIC
).
"LMR-LRT"
: Significance value (p-value) of the Vuong-Lo-Mendell-Rubin test
(i.e., VUONG-LO-MENDELL-RUBIN LIKELIHOOD RATIO TEST
).
Provided only when OUTPUT: TECH11
.
"A-LRT"
: Significance value (p-value) of the Adjusted Lo-Mendell-Rubin Test
(i.e., LO-MENDELL-RUBIN ADJUSTED LRT TEST
).
Provided only when OUTPUT: TECH11
.
"BLRT"
: Significance value (p-value) of the bootstrapped
likelihood ratio test. Provided only when OUTPUT: TECH14
.
"Entropy"
: Sample-size adjusted BIC (i.e., Entropy
).
"p1"
: Class proportion of the first class based on the estimated
posterior probabilities (i.e., FINAL CLASS COUNTS AND PROPORTIONS
).
"p2"
: Class proportion of the second class based on the estimated
posterior probabilities (i.e., FINAL CLASS COUNTS AND PROPORTIONS
).
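As noted above, the CAIC is not part of the Mplus output but follows directly
from the reported quantities (a sketch with hypothetical values read from an
output file):

# Hypothetical values from an Mplus output
loglik <- -2945.3    # H0 Value
nparam <- 13         # Number of Free Parameters
n <- 300             # number of observations
BIC <- -2 * loglik + nparam * log(n)
CAIC <- BIC + nparam # CAIC = BIC + #Param, as defined above
CAIC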
Returns an object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
output |
list with all Mplus outputs |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Masyn, K. E. (2013). Latent class analysis and finite mixture modeling. In T. D. Little (Ed.), The Oxford handbook of quantitative methods: Statistical analysis (pp. 551–611). Oxford University Press.
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
mplus.lca, mplus.run, read.mplus, write.mplus
## Not run:
# Load data set "HolzingerSwineford1939" in the lavaan package
data("HolzingerSwineford1939", package = "lavaan")

# Run LCA with k = 1 to k = 6 classes
mplus.lca(HolzingerSwineford1939, ind = c("x1", "x2", "x3", "x4"),
          run.mplus = TRUE)

# Example 1a: Read Mplus output files, create result table, write table, and save plots
result.lca(write = "LCA.xlsx", plot = TRUE)

# Example 1b: Write results into a text file
result.lca(write = "LCA.txt")

#-------------------------------------------------------------------------------
# Example 2: Draw bar chart manually

library(ggplot2)

# Collect LCA results
lca.result <- result.lca()

# Result table with means
means <- lca.result$result$mean

# Extract results from variance-covariance structure A with 4 latent classes
plotdat <- means[means$folder == "A_Invariant-Theta_Diagonal-Sigma" & means$nclass == 4, ]

# Draw bar chart
ggplot(plotdat, aes(ind, est, group = class, fill = class)) +
  geom_bar(stat = "identity", position = "dodge", color = "black",
           linewidth = 0.1) +
  geom_errorbar(aes(ymin = low, ymax = upp), width = 0.23, linewidth = 0.2,
                position = position_dodge(0.9)) +
  scale_x_discrete("") +
  scale_y_continuous("Mean Value", limits = c(0, 9), breaks = seq(0, 9, by = 1)) +
  labs(fill = "Latent Class") +
  guides(fill = guide_legend(nrow = 1L)) +
  theme(axis.title = element_text(size = 11),
        axis.text = element_text(size = 11),
        legend.position = "bottom",
        legend.key.size = unit(0.5, 'cm'),
        legend.title = element_text(size = 11),
        legend.text = element_text(size = 11),
        legend.box.spacing = unit(-9L, "pt"))

# Save bar chart
ggsave("LCA_4-Class.png", dpi = 600, width = 6, height = 4)
## End(Not run)
This function computes heteroscedasticity-consistent standard errors and
significance values for linear models estimated by using the lm()
function and generalized linear models estimated by using the glm()
function. For linear models the heteroscedasticity-robust F-test is computed
as well. By default, the function uses the HC4 estimator.
robust.coef(model, type = c("HC0", "HC1", "HC2", "HC3", "HC4", "HC4m", "HC5"),
            digits = 3, p.digits = 4, write = NULL, append = TRUE,
            check = TRUE, output = TRUE)
model |
a fitted model of class |
type |
a character string specifying the estimation type, where
|
digits |
an integer value indicating the number of decimal places
to be used for displaying results. Note that information
criteria and chi-square test statistic are printed with
|
p.digits |
an integer value indicating the number of decimal places to be used for displaying p-values. |
write |
a character string naming a file for writing the output into
either a text file with file extension |
append |
logical: if |
check |
logical: if |
output |
logical: if |
The family of heteroscedasticity-consistent (HC) standard errors estimator for the model parameters of a regression model is based on an HC covariance matrix of the parameter estimates and does not require the assumption of homoscedasticity. HC estimators approach the correct value with increasing sample size, even in the presence of heteroscedasticity. On the other hand, the OLS standard error estimator is biased and does not converge to the proper value when the assumption of homoscedasticity is violated (Darlington & Hayes, 2017).
White (1980) introduced
the idea of HC covariance matrix to econometricians and derived the asymptotically
justified form of the HC covariance matrix known as HC0 (Long & Ervin, 2000).
Simulation studies have shown that the HC0 estimator tends to underestimate the
true variance in small to moderately large samples and in
the presence of leverage observations, which leads to an inflated
type I error risk (e.g., Cribari-Neto & Lima, 2014). The alternative estimators
HC1 to HC5 are asymptotically equivalent to HC0 but include finite-sample corrections,
which results in superior small sample properties compared to the HC0 estimator.
Long and Ervin (2000) recommended routinely using the HC3 estimator regardless
of a heteroscedasticity test. However, the HC3 estimator can be unreliable when
the data contains leverage observations. The HC4 estimator, on
the other hand, performs well with small samples, in the presence of high leverage
observations, and when errors are not normally distributed (Cribari-Neto, 2004).
In summary, it appears that the HC4 estimator performs the best in terms of
controlling the type I and type II error risk (Rosopa et al., 2013). As opposed to the
findings of Cribari-Neto et al. (2007), the HC5 estimator did not show any
substantial advantages over HC4. Both HC5 and HC4 performed similarly across
all the simulation conditions considered in the study (Ng & Wilcox, 2009).
Note that the F-test of significance on the multiple correlation coefficient R also assumes homoscedasticity of the errors. Violations of this assumption can result in a hypothesis test that is either liberal or conservative, depending on the form and severity of the heteroscedasticity.
Hayes and Cai (2007) argued that using an HC estimator instead of assuming homoscedasticity provides researchers with more confidence in the validity and statistical power of inferential tests in regression analysis. Hence, the HC3 or HC4 estimator should be used routinely when estimating regression models. If an HC estimator is not used as the default method of standard error estimation, researchers are advised to at least double-check the results by using an HC estimator to ensure that conclusions are not compromised by heteroscedasticity. However, the presence of heteroscedasticity suggests that the data is not adequately explained by the statistical model of estimated conditional means. Unless heteroscedasticity is believed to be solely caused by measurement error associated with the predictor variable(s), it should serve as a warning to the researcher regarding the adequacy of the estimated model.
Returns an object of class misty.object, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
model |
model specified in |
args |
specification of function arguments |
result |
list with results, i.e., |
This function is based on the vcovHC
function from the sandwich
package (Zeileis, Köll, & Graham, 2020) and the functions coeftest
and
waldtest
from the lmtest
package (Zeileis & Hothorn, 2002).
Takuya Yanagida [email protected]
Darlington, R. B., & Hayes, A. F. (2017). Regression analysis and linear models: Concepts, applications, and implementation. The Guilford Press.
Cribari-Neto, F. (2004). Asymptotic inference under heteroskedasticity of unknown form. Computational Statistics & Data Analysis, 45, 215-233. https://doi.org/10.1016/S0167-9473(02)00366-3
Cribari-Neto, F., & Lima, M. G. (2014). New heteroskedasticity-robust standard errors for the linear regression model. Brazilian Journal of Probability and Statistics, 28, 83-95.
Cribari-Neto, F., Souza, T., & Vasconcellos, K. L. P. (2007). Inference under heteroskedasticity and leveraged data. Communications in Statistics - Theory and Methods, 36, 1877-1888. https://doi.org/10.1080/03610920601126589
Hayes, A. F., & Cai, L. (2007). Using heteroscedasticity-consistent standard error estimators in OLS regression: An introduction and software implementation. Behavior Research Methods, 39, 709-722. https://doi.org/10.3758/BF03192961
Long, J.S., & Ervin, L.H. (2000). Using heteroscedasticity consistent standard errors in the linear regression model. The American Statistician, 54, 217-224. https://doi.org/10.1080/00031305.2000.10474549
Ng, M., & Wilcoy, R. R. (2009). Level robust methods based on the least squares regression estimator. Journal of Modern Applied Statistical Methods, 8, 284-395. https://doi.org/10.22237/jmasm/1257033840
Rosopa, P. J., Schaffer, M. M., & Schroeder, A. N. (2013). Managing heteroscedasticity in general linear models. Psychological Methods, 18(3), 335-351. https://doi.org/10.1037/a0032553
White, H. (1980). A heteroskedastic-consistent covariance matrix estimator and a direct test of heteroskedasticity. Econometrica, 48, 817-838. https://doi.org/10.2307/1912934
Zeileis, A., & Hothorn, T. (2002). Diagnostic checking in regression relationships. R News, 2(3), 7–10. http://CRAN.R-project.org/doc/Rnews/
Zeileis A, Köll S, & Graham N (2020). Various versatile variances: An object-oriented implementation of clustered covariances in R. Journal of Statistical Software, 95(1), 1-36. https://doi.org/10.18637/jss.v095.i01
dat <- data.frame(x1 = c(3, 2, 4, 9, 5, 3, 6, 4, 5, 6, 3, 5),
                  x2 = c(1, 4, 3, 1, 2, 4, 3, 5, 1, 7, 8, 7),
                  x3 = c(0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1),
                  y1 = c(2, 7, 4, 4, 7, 8, 4, 2, 5, 1, 3, 8),
                  y2 = c(0, 1, 0, 2, 0, 1, 0, 0, 1, 2, 1, 0))

#-------------------------------------------------------------------------------
# Example 1: Linear model
mod1 <- lm(y1 ~ x1 + x2 + x3, data = dat)
robust.coef(mod1)

#-------------------------------------------------------------------------------
# Example 2: Generalized linear model
mod2 <- glm(y2 ~ x1 + x2 + x3, data = dat, family = poisson())
robust.coef(mod2)

## Not run:
#----------------------------------------------------------------------------
# Write Results

# Example 3a: Write Results into a text file
robust.coef(mod1, write = "Robust_Coef.txt", output = FALSE)

# Example 3b: Write Results into an Excel file
robust.coef(mod1, write = "Robust_Coef.xlsx", output = FALSE)

result <- robust.coef(mod1, output = FALSE)
write.result(result, "Robust_Coef.xlsx")
## End(Not run)
This function computes the r*wg(j) within-group agreement index for multi-item scales as described in Lindell, Brandt, and Whitney (1999).
rwg.lindell(..., data = NULL, cluster, A = NULL, ranvar = NULL, z = TRUE,
            expand = TRUE, na.omit = FALSE, append = TRUE, name = "rwg",
            as.na = NULL, check = TRUE)
... |
a numeric vector or data frame. Alternatively, an expression
indicating the variable names in |
data |
a data frame when specifying one or more variables in the
argument |
cluster |
either a character string indicating the variable name of
the cluster variable in |
A |
a numeric value indicating the number of discrete response
options of the items from which the random variance is computed
based on |
ranvar |
a numeric value indicating the random variance to which the
mean of the item variance is divided. Note that either the
argument |
z |
logical: if |
expand |
logical: if |
na.omit |
logical: if |
append |
logical: if |
name |
a character string indicating the name of the variable appended
to the data frame specified in the argument |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
The r*wg(j) index is calculated by dividing the mean of the item variances by
the expected random variance (i.e., null distribution). The default null distribution
in most research is the rectangular or uniform distribution with the expected
random variance (A^2 - 1) / 12, where A is the number of discrete response
options of the items. However, what constitutes a reasonable standard for random
variance is highly debated. Note that r*wg(j) allows the mean of the
item variances to be larger than the expected random variance, i.e., r*wg(j)
values can be negative.
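To make the computation concrete, the following minimal sketch reproduces the index by hand for a single cluster, assuming A = 5 discrete response options (the data are made up for illustration):

# Hand computation of r*wg(j) for one cluster (illustrative data)
x <- data.frame(x1 = c(2, 3, 2), x2 = c(3, 2, 2), x3 = c(3, 1, 1))
ranvar <- (5^2 - 1) / 12        # expected random variance under the uniform null
obsvar <- mean(sapply(x, var))  # mean of the item variances
1 - obsvar / ranvar             # r*wg(j); can be negative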
Note that the rwg.j.lindell()
function in the multilevel package
uses listwise deletion by default, while the rwg.lindell()
function uses
all available information to compute the r*wg(j) agreement index by default. In
order to obtain equivalent results in the presence of missing values, listwise
deletion (na.omit = TRUE
) needs to be applied.
Examples of the application of the r*wg(j) within-group agreement index for multi-item scales can be found in Bardach, Yanagida, Schober, and Lueftenegger (2018), Bardach, Lueftenegger, Yanagida, Schober, and Spiel (2019), and Bardach, Lueftenegger, Yanagida, and Schober (2019).
Returns a numeric vector containing the r*wg(j) agreement index for multi-item scales
with the same length as the cluster variable if expand = TRUE, or a data frame with
the following entries if expand = FALSE:
cluster |
cluster identifier |
n |
cluster size |
rwg.lindell |
r*wg(j) estimate for each cluster |
z.rwg.lindell |
Fisher z-transformed r*wg(j) estimate for each cluster |
Takuya Yanagida [email protected]
Bardach, L., Lueftenegger, M., Yanagida, T., & Schober, B. (2019). Achievement or agreement - Which comes first? Clarifying the temporal ordering of achievement and within-class consensus on classroom goal structures. Learning and Instruction, 61, 72-83. https://doi.org/10.1016/j.learninstruc.2019.01.003
Bardach, L., Lueftenegger, M., Yanagida, T., Schober, B., & Spiel, C. (2019). The role of within-class consensus on mastery goal structures in predicting socio-emotional outcomes. British Journal of Educational Psychology, 89, 239-258. https://doi.org/10.1111/bjep.12237
Bardach, L., Yanagida, T., Schober, B., & Lueftenegger, M. (2018). Within-class consensus on classroom goal structures: Relations to achievement and achievement goals in mathematics and language classes. Learning and Individual Differences, 67, 78-90. https://doi.org/10.1016/j.lindif.2018.07.002
Lindell, M. K., Brandt, C. J., & Whitney, D. J. (1999). A revised index of interrater agreement for multi-item ratings of a single target. Applied Psychological Measurement, 23, 127-135. https://doi.org/10.1177/01466219922031257
O'Neill, T. A. (2017). An overview of interrater agreement on Likert scales for researchers and practitioners. Frontiers in Psychology, 8, Article 777. https://doi.org/10.3389/fpsyg.2017.00777
dat <- data.frame(id = c(1, 2, 3, 4, 5, 6, 7, 8, 9),
                  cluster = c(1, 1, 1, 2, 2, 2, 3, 3, 3),
                  x1 = c(2, 3, 2, 1, 1, 2, 4, 3, 5),
                  x2 = c(3, 2, 2, 1, 2, 1, 3, 2, 5),
                  x3 = c(3, 1, 1, 2, 3, 3, 5, 5, 4))

# Example 1a: Compute Fisher z-transformed r*wg(j) for a multi-item scale
# with A = 5 response options
rwg.lindell(dat[, c("x1", "x2", "x3")], cluster = dat$cluster, A = 5)

# Example 1b: Alternative specification using the 'data' argument
rwg.lindell(x1:x3, data = dat, cluster = "cluster", A = 5)

# Example 2: Compute Fisher z-transformed r*wg(j) for a multi-item scale
# with a random variance of 2
rwg.lindell(dat[, c("x1", "x2", "x3")], cluster = dat$cluster, ranvar = 2)

# Example 3: Compute r*wg(j) for a multi-item scale with A = 5 response options
rwg.lindell(dat[, c("x1", "x2", "x3")], cluster = dat$cluster, A = 5, z = FALSE)

# Example 4: Compute Fisher z-transformed r*wg(j) for a multi-item scale
# with A = 5 response options, do not expand the vector
rwg.lindell(dat[, c("x1", "x2", "x3")], cluster = dat$cluster, A = 5, expand = FALSE)
This function saves a copy of the current script in RStudio. By default, a
folder called _R_Script_Archive is created, and a copy of the current R script,
with the current date and time appended to the file name, is saved into this
folder. Note that the current R script needs to have a file location before
the script can be copied.
script.copy(file = NULL, folder = "_R_Script_Archive", create.folder = TRUE,
            time = TRUE, format = "%Y-%m-%d_%H%M", overwrite = TRUE,
            check = TRUE)
file |
a character string naming the file of the copy without
the file extension |
folder |
a character string naming the folder in which the file
of the copy is saved. If |
create.folder |
logical: if |
time |
logical: if |
format |
a character string indicating the format if the |
overwrite |
logical: if |
check |
logical: if |
This function uses the getSourceEditorContext()
function in the
rstudioapi package by Kevin Ushey, JJ Allaire, Hadley Wickham, and Gary
Ritchie (2023).
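A minimal sketch of the underlying idea, assuming an RStudio session with a saved active script and the default folder and time format:

# Copy the active script into the archive folder with a timestamp added
path <- rstudioapi::getSourceEditorContext()$path
dir.create("_R_Script_Archive", showWarnings = FALSE)
file.copy(path, file.path("_R_Script_Archive",
                          paste0(format(Sys.time(), "%Y-%m-%d_%H%M"), "_",
                                 basename(path))))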
Takuya Yanagida [email protected]
Ushey, K., Allaire, J., Wickham, H., & Ritchie, G. (2023). rstudioapi: Safely access the RStudio API. R package version 0.15.0. https://CRAN.R-project.org/package=rstudioapi
script.new
, script.close
, script.open
, script.save
, setsource
## Not run:
# Example 1: Save a copy of the current R script into the folder '_R_Script_Archive'
script.copy()

# Example 2: Save current R script as 'R_Script.R' into the folder 'Archive'
script.copy("R_Script", folder = "Archive", time = FALSE)
## End(Not run)
This function opens a new R script, R Markdown script, or SQL script in RStudio.
script.new(text = "", type = c("r", "rmarkdown", "sql"),
           position = rstudioapi::document_position(0, 0), run = FALSE,
           check = TRUE)
text |
a character vector indicating what text should be inserted in the new R script. By default, an empty script is opened. |
type |
a character string indicating the type of document to be
created, i.e., |
position |
|
run |
logical: if |
check |
logical: if |
This function uses the documentNew()
function in the rstudioapi
package by Kevin Ushey, JJ Allaire, Hadley Wickham, and Gary Ritchie (2023).
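A minimal sketch of the underlying call, using illustrative text:

# Open a new R script containing a comment and one expression
rstudioapi::documentNew(text = "# Example\nrnorm(100)", type = "r")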
Takuya Yanagida [email protected]
Ushey, K., Allaire, J., Wickham, H., & Ritchie, G. (2023). rstudioapi: Safely access the RStudio API. R package version 0.15.0. https://CRAN.R-project.org/package=rstudioapi
script.close
, script.open
,
script.save
, script.copy
, setsource
## Not run:
# Example 1: Open new R script file
script.new()

# Example 2: Open new R script file and run some code
script.new("#----------------------------
# Example

# Generate 100 random numbers
rnorm(100)")
## End(Not run)
The function script.open
opens an R script, R Markdown script, or SQL
script in RStudio, the function script.close
closes an R script, and
the function script.save
saves an R script. Note that the R script needs
to have a file location before the script can be saved.
script.open(path, line = 1, col = 1, cursor = TRUE, run = FALSE,
            echo = TRUE, max.length = 999, spaced = TRUE, check = TRUE)

script.close(save = FALSE, check = TRUE)

script.save(all = FALSE, check = TRUE)
path |
a character string indicating the path of the script. |
line |
a numeric value indicating the line in the script to navigate to. |
col |
a numeric value indicating the column in the script to navigate to. |
cursor |
logical: if |
run |
logical: if |
echo |
logical: if |
max.length |
a numeric value indicating the maximal number of characters output for the deparse of a single expression. |
spaced |
logical: if |
save |
logical: if |
all |
logical: if |
check |
logical: if |
This function uses the documentOpen()
, documentPath()
,
documentClose()
, documentSave()
, and documentSaveAll()
functions in the rstudioapi package by Kevin Ushey, JJ Allaire, Hadley
Wickham, and Gary Ritchie (2023).
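A minimal sketch of the underlying calls, assuming a file 'script.R' exists in the working directory:

id <- rstudioapi::documentOpen("script.R")   # open the script, keep its id
rstudioapi::documentSave(id)                 # save this document
rstudioapi::documentClose(id, save = FALSE)  # close it without saving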
Takuya Yanagida [email protected]
Ushey, K., Allaire, J., Wickham, H., & Ritchie, G. (2023). rstudioapi: Safely access the RStudio API. R package version 0.15.0. https://CRAN.R-project.org/package=rstudioapi
script.save
, script.copy
, setsource
## Not run:
# Example 1: Open R script file
script.open("script.R")

# Example 2: Open R script file and run the code
script.open("script.R", run = TRUE)

# Example 3: Close current R script file
script.close()

# Example 4: Save current R script
script.save()

# Example 5: Save all R scripts
script.save(all = TRUE)
## End(Not run)
This function sets the working directory to the source file location
(i.e., path of the current R script) in RStudio and is equivalent to using the
menu item Session - Set Working Directory - To Source File Location
.
Note that the R script needs to have a file location before this function can
be used.
setsource(path = TRUE, check = TRUE)
path |
logical: if |
check |
logical: if |
Returns the path of the source file location.
This function uses the documentPath()
function in the
rstudioapi package by Kevin Ushey, JJ Allaire, Hadley Wickham, and Gary
Ritchie (2023).
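A minimal sketch of the underlying idea, assuming a saved R script is open in RStudio:

# Set the working directory to the folder containing the active script
setwd(dirname(rstudioapi::documentPath()))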
Takuya Yanagida [email protected]
Ushey, K., Allaire, J., Wickham, H., & Ritchie, G. (2023). rstudioapi: Safely access the RStudio API. R package version 0.15.0. https://CRAN.R-project.org/package=rstudioapi
script.close
, script.new
, script.open
,
script.save
## Not run:
# Example 1: Set working directory to the source file location
setsource()

# Example 2: Set working directory to the source file location
# and assign path to an object
path <- setsource()
path
## End(Not run)
This function performs sample size computation for testing Pearson's product-moment correlation coefficient based on precision requirements (i.e., type-I-risk, type-II-risk and an effect size).
size.cor(rho, delta, alternative = c("two.sided", "less", "greater"),
         alpha = 0.05, beta = 0.1, write = NULL, append = TRUE, check = TRUE,
         output = TRUE)
rho |
a number indicating the correlation coefficient under the null hypothesis, |
delta |
a numeric value indicating the minimum difference to be detected, |
alternative |
a character string specifying the alternative hypothesis,
must be one of |
alpha |
type-I-risk, |
beta |
type-II-risk, |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
result |
list with the result, i.e., optimal sample size |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. New York: John Wiley & Sons.
Rasch, D., Pilz, J., Verdooren, L. R., & Gebhardt, G. (2011). Optimal experimental design with R. Boca Raton: Chapman & Hall/CRC.
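The optimal sample size for testing a correlation is commonly approximated via Fisher's z-transformation. A minimal sketch of this approximation for the two-sided setup of Example 1 below (the exact computation in size.cor() may differ in details):

# Fisher z-based approximation of the required sample size
rho.0 <- 0.3; delta <- 0.2; alpha <- 0.05; beta <- 0.2
n <- ((qnorm(1 - alpha / 2) + qnorm(1 - beta)) /
        (atanh(rho.0 + delta) - atanh(rho.0)))^2 + 3
ceiling(n)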
#-------------------------------------------------------------------------------
# Example 1: Two-sided test
# H0: rho = 0.3, H1: rho != 0.3
# alpha = 0.05, beta = 0.2, delta = 0.2

size.cor(rho = 0.3, delta = 0.2, alpha = 0.05, beta = 0.2)

#-------------------------------------------------------------------------------
# Example 2: One-sided test
# H0: rho <= 0.3, H1: rho > 0.3
# alpha = 0.05, beta = 0.2, delta = 0.2

size.cor(rho = 0.3, delta = 0.2, alternative = "greater",
         alpha = 0.05, beta = 0.2)
This function performs sample size computation for the one-sample and two-sample t-test based on precision requirements (i.e., type-I-risk, type-II-risk and an effect size).
size.mean(delta, sample = c("two.sample", "one.sample"),
          alternative = c("two.sided", "less", "greater"), alpha = 0.05,
          beta = 0.1, write = NULL, append = TRUE, check = TRUE, output = TRUE)
delta |
a numeric value indicating the relative minimum difference
to be detected, |
sample |
a character string specifying one- or two-sample t-test,
must be one of |
alternative |
a character string specifying the alternative hypothesis,
must be one of |
alpha |
type-I-risk, |
beta |
type-II-risk, |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
result |
list with the result, i.e., optimal sample size |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. New York: John Wiley & Sons.
Rasch, D., Pilz, J., Verdooren, L. R., & Gebhardt, G. (2011). Optimal experimental design with R. Boca Raton: Chapman & Hall/CRC.
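For orientation, a normal-approximation sketch of the per-group sample size for the two-sided two-sample setup of Example 3 below; size.mean() itself is based on the t-distribution, so its result may differ slightly:

# Normal approximation of the per-group sample size, two-sample test
delta <- 1; alpha <- 0.01; beta <- 0.1
n <- 2 * ((qnorm(1 - alpha / 2) + qnorm(1 - beta)) / delta)^2
ceiling(n)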
#-------------------------------------------------------------------------------
# Example 1: Two-sided one-sample test
# H0: mu = mu.0, H1: mu != mu.0
# alpha = 0.05, beta = 0.2, delta = 0.5

size.mean(delta = 0.5, sample = "one.sample",
          alternative = "two.sided", alpha = 0.05, beta = 0.2)

#-------------------------------------------------------------------------------
# Example 2: One-sided one-sample test
# H0: mu <= mu.0, H1: mu > mu.0
# alpha = 0.05, beta = 0.2, delta = 0.5

size.mean(delta = 0.5, sample = "one.sample",
          alternative = "greater", alpha = 0.05, beta = 0.2)

#-------------------------------------------------------------------------------
# Example 3: Two-sided two-sample test
# H0: mu.1 = mu.2, H1: mu.1 != mu.2
# alpha = 0.01, beta = 0.1, delta = 1

size.mean(delta = 1, sample = "two.sample",
          alternative = "two.sided", alpha = 0.01, beta = 0.1)

#-------------------------------------------------------------------------------
# Example 4: One-sided two-sample test
# H0: mu.1 <= mu.2, H1: mu.1 > mu.2
# alpha = 0.01, beta = 0.1, delta = 1

size.mean(delta = 1, sample = "two.sample",
          alternative = "greater", alpha = 0.01, beta = 0.1)
This function performs sample size computation for the one-sample and two-sample test for proportions based on precision requirements (i.e., type-I-risk, type-II-risk and an effect size).
size.prop(pi = 0.5, delta, sample = c("two.sample", "one.sample"),
          alternative = c("two.sided", "less", "greater"), alpha = 0.05,
          beta = 0.1, correct = FALSE, write = NULL, append = TRUE,
          check = TRUE, output = TRUE)
pi |
a number indicating the true value of the probability under the null hypothesis (one-sample test), |
delta |
minimum difference to be detected, |
sample |
a character string specifying one- or two-sample proportion test,
must be one of |
alternative |
a character string specifying the alternative hypothesis,
must be one of |
alpha |
type-I-risk, |
beta |
type-II-risk, |
correct |
a logical indicating whether continuity correction should be applied. |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
data |
matrix or data frame specified in |
args |
specification of function arguments |
result |
list with the result, i.e., optimal sample size |
Takuya Yanagida [email protected]
Fleiss, J. L., Levin, B., & Paik, M. C. (2003). Statistical methods for rates and proportions (3rd ed.). John Wiley & Sons.
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
Rasch, D., Pilz, J., Verdooren, L. R., & Gebhardt, G. (2011). Optimal experimental design with R. Chapman & Hall/CRC.
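For orientation, a minimal sketch of the classical normal-approximation formula for the two-sided one-sample case without continuity correction, matching the setup of Example 1 below (the computation in size.prop() may differ in details):

# Normal approximation of the sample size, one-sample proportion test
pi.0 <- 0.5; delta <- 0.2; alpha <- 0.05; beta <- 0.2
pi.1 <- pi.0 + delta
n <- ((qnorm(1 - alpha / 2) * sqrt(pi.0 * (1 - pi.0)) +
         qnorm(1 - beta) * sqrt(pi.1 * (1 - pi.1))) / delta)^2
ceiling(n)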
#-------------------------------------------------------------------------------
# Example 1: Two-sided one-sample test
# H0: pi = 0.5, H1: pi != 0.5
# alpha = 0.05, beta = 0.2, delta = 0.2

size.prop(pi = 0.5, delta = 0.2, sample = "one.sample",
          alternative = "two.sided", alpha = 0.05, beta = 0.2)

#-------------------------------------------------------------------------------
# Example 2: Two-sided one-sample test
# H0: pi = 0.5, H1: pi != 0.5
# alpha = 0.05, beta = 0.2, delta = 0.2
# with continuity correction

size.prop(pi = 0.5, delta = 0.2, sample = "one.sample",
          alternative = "two.sided", alpha = 0.05, beta = 0.2,
          correct = TRUE)

#-------------------------------------------------------------------------------
# Example 3: One-sided one-sample test
# H0: pi >= 0.5, H1: pi < 0.5
# alpha = 0.05, beta = 0.2, delta = 0.2

size.prop(pi = 0.5, delta = 0.2, sample = "one.sample",
          alternative = "less", alpha = 0.05, beta = 0.2)

#-------------------------------------------------------------------------------
# Example 4: Two-sided two-sample test
# H0: pi.1 = pi.2 = 0.5, H1: pi.1 != pi.2
# alpha = 0.01, beta = 0.1, delta = 0.2

size.prop(pi = 0.5, delta = 0.2, sample = "two.sample",
          alternative = "two.sided", alpha = 0.01, beta = 0.1)

#-------------------------------------------------------------------------------
# Example 5: One-sided two-sample test
# H0: pi.1 <= pi.2, H1: pi.1 > pi.2
# alpha = 0.01, beta = 0.1, delta = 0.2

size.prop(pi = 0.5, delta = 0.2, sample = "two.sample",
          alternative = "greater", alpha = 0.01, beta = 0.1)
The function skewness
computes the skewness, the function kurtosis
computes the kurtosis.
skewness(..., data = NULL, as.na = NULL, check = TRUE)

kurtosis(..., data = NULL, as.na = NULL, check = TRUE)
... |
a numeric vector. Alternatively, an expression indicating the
variable names in |
data |
a data frame when specifying the variable in the argument
|
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
check |
logical: if |
The same method for estimating skewness and kurtosis is used in SAS and SPSS.
Missing values (NA
) are stripped before the computation. Note that at
least 3 observations are needed to compute skewness and at least 4 observations
are needed to compute excess kurtosis.
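For reference, a minimal sketch of the estimators used by SAS and SPSS (adjusted Fisher-Pearson skewness and excess kurtosis), assuming a numeric vector x with at least four complete observations:

# Hand computation of sample skewness and excess kurtosis (SAS/SPSS method)
x <- c(1, 2, 4, 7, 9)
n <- length(x); m <- mean(x); s <- sd(x)
n / ((n - 1) * (n - 2)) * sum(((x - m) / s)^3)  # skewness
n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * sum(((x - m) / s)^4) -
  3 * (n - 1)^2 / ((n - 2) * (n - 3))           # excess kurtosis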
Returns the estimated skewness or kurtosis of x
.
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. New York: John Wiley & Sons.
# Set seed of the random number generation
set.seed(123)

# Generate random numbers according to N(0, 1)
x <- rnorm(100)

# Example 1: Compute skewness
skewness(x)

# Example 2: Compute excess kurtosis
kurtosis(x)
This function computes standardized coefficients for linear models estimated by using the lm()
function.
std.coef(model, print = c("all", "stdx", "stdy", "stdyx"), digits = 3,
         p.digits = 4, write = NULL, append = TRUE, check = TRUE,
         output = TRUE)
model |
a fitted model of class |
print |
a character vector indicating which results to show, i.e. |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
The slope b can be standardized with respect to only x, only y, or both y and x:

StdX(b) = b * SD(x) standardizes with respect to x only and is interpreted as
the change in y when x changes one standard deviation, referred to as SD(x).

StdY(b) = b / SD(y) standardizes with respect to y only and is interpreted as
the change in y standard deviation units, referred to as SD(y), when x changes
one unit.

StdYX(b) = b * SD(x) / SD(y) standardizes with respect to both y and x and is
interpreted as the change in y standard deviation units when x changes one
standard deviation.

Note that the StdX(b) and the StdYX(b) standardizations are not suitable for the
slope of a binary predictor because a one standard deviation change in a binary variable is generally
not of interest (Muthen, Muthen, & Asparouhov, 2016).

The standardization of the slope b3 of the interaction variable x1*x2 in a
regression model with an interaction term uses the product of standard deviations
SD(x1) * SD(x2) rather than the standard deviation of the product SD(x1 * x2)
(see Wen, Marsh & Hau, 2010). Likewise, the standardization of the slope b2 of
the quadratic term x^2 in a polynomial regression model uses the product of
standard deviations SD(x) * SD(x) rather than the standard deviation of the
product SD(x * x).
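For illustration, a minimal sketch of the three standardizations computed by hand, assuming the model mod.lm1 <- lm(y ~ x1 + x2, data = dat) from the examples below:

# Hand computation of StdX, StdY, and StdYX for the slope of x1
b <- coef(mod.lm1)["x1"]
b * sd(dat$x1)              # StdX:  change in y per SD(x1)
b / sd(dat$y)               # StdY:  change in SD(y) units per unit of x1
b * sd(dat$x1) / sd(dat$y)  # StdYX: change in SD(y) units per SD(x1)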
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
model |
model specified in |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Muthen, B. O., Muthen, L. K., & Asparouhov, T. (2016). Regression and mediation analysis using Mplus. Muthen & Muthen.
Wen, Z., Marsh, H. W., & Hau, K.-T. (2010). Structural equation models of latent interactions: An appropriate standardized solution and its scale-free properties. Structural Equation Modeling: A Multidisciplinary Journal, 17, 1-22. https://doi.org/10.1080/10705510903438872
dat <- data.frame(x1 = c(3, 2, 4, 9, 5, 3, 6, 4, 5, 6, 3, 5),
                  x2 = c(1, 4, 3, 1, 2, 4, 3, 5, 1, 7, 8, 7),
                  x3 = c(0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1),
                  y = c(2, 7, 4, 4, 7, 8, 4, 2, 5, 1, 3, 8))

#-------------------------------------------------------------------------------
# Linear model

# Example 1: Regression model with continuous predictors
mod.lm1 <- lm(y ~ x1 + x2, data = dat)
std.coef(mod.lm1)

# Example 2: Print all standardized coefficients
std.coef(mod.lm1, print = "all")

# Example 3: Regression model with dichotomous predictor
mod.lm2 <- lm(y ~ x3, data = dat)
std.coef(mod.lm2)

# Example 4: Regression model with continuous and dichotomous predictors
mod.lm3 <- lm(y ~ x1 + x2 + x3, data = dat)
std.coef(mod.lm3)

# Example 5: Regression model with continuous predictors and an interaction term
mod.lm4 <- lm(y ~ x1*x2, data = dat)
std.coef(mod.lm4)

# Example 6: Regression model with a quadratic term
mod.lm5 <- lm(y ~ x1 + I(x1^2), data = dat)
std.coef(mod.lm5)

#-------------------------------------------------------------------------------
# Example 7: Write Results into an Excel file

## Not run:
mod.lm1 <- lm(y ~ x1 + x2, data = dat)
std.coef(mod.lm1, write = "Std_Coef.xlsx", output = FALSE)

result <- std.coef(mod.lm1, output = FALSE)
write.result(result, "Std_Coef.xlsx")
## End(Not run)
This function performs Levene's test for homogeneity of variance across two or more independent groups.
test.levene(formula, data, method = c("median", "mean"), conf.level = 0.95,
            hypo = TRUE, descript = TRUE, plot = FALSE, violin.alpha = 0.3,
            violin.trim = FALSE, box = TRUE, box.alpha = 0.2, box.width = 0.2,
            jitter = TRUE, jitter.size = 1.25, jitter.width = 0.05,
            jitter.height = 0, jitter.alpha = 0.2, gray = FALSE, start = 0.9,
            end = 0.4, color = NULL, xlab = NULL, ylab = NULL, ylim = NULL,
            breaks = ggplot2::waiver(), title = "", subtitle = "", digits = 2,
            p.digits = 3, as.na = NULL, write = NULL, append = TRUE,
            check = TRUE, output = TRUE)
formula |
a formula of the form |
data |
a matrix or data frame containing the variables in the
formula |
method |
a character string specifying the method to compute the
center of each group, i.e. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
hypo |
logical: if |
descript |
logical: if |
plot |
logical: if |
violin.alpha |
a numeric value indicating the opacity of the violins. |
violin.trim |
logical: if |
box |
logical: if |
box.alpha |
a numeric value indicating the opacity of the boxplots. |
box.width |
a numeric value indicating the width of the boxplots. |
jitter |
logical: if |
jitter.size |
a numeric value indicating the |
jitter.width |
a numeric value indicating the amount of horizontal jitter. |
jitter.height |
a numeric value indicating the amount of vertical jitter. |
jitter.alpha |
a numeric value indicating the opacity of the jittered data points. |
gray |
logical: if |
start |
a numeric value between 0 and 1, graphical parameter to specify the gray value at the low end of the palette. |
end |
a numeric value between 0 and 1, graphical parameter to specify the gray value at the high end of the palette. |
color |
a character vector, indicating the color of the violins and the boxes. By default, default ggplot2 colors are used. |
xlab |
a character string specifying the labels for the x-axis. |
ylab |
a character string specifying the labels for the y-axis. |
ylim |
a numeric vector of length two specifying the limits of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
title |
a character string specifying the text for the title for the plot. |
subtitle |
a character string specifying the text for the subtitle for the plot. |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
Levene's test is equivalent to a one-way analysis of variance (ANOVA) with the
absolute deviations of the observations from the mean of each group as the
dependent variable (method = "mean"). Brown and Forsythe (1974) modified
Levene's test by using the absolute deviations of the observations from the
median (method = "median"). By default, this function uses the absolute
deviations of the observations from the median.
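A minimal sketch of this equivalence, assuming the data frame dat from the examples below:

# Brown-Forsythe variant: one-way ANOVA on absolute deviations from group medians
ad <- abs(dat$y - ave(dat$y, dat$group, FUN = median))
summary(aov(ad ~ factor(group), data = dat))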
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
formula |
formula of the current analysis |
data |
data frame specified in |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
list with result tables, i.e., |
Takuya Yanagida [email protected]
Brown, M. B., & Forsythe, A. B. (1974). Robust tests for the equality of variances. Journal of the American Statistical Association, 69, 364-367.
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
dat <- data.frame(y = c(2, 3, 4, 5, 5, 7, 8, 4, 5, 2, 4, 3),
                  group = c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3))

# Example 1: Levene's test based on the median with 95% confidence interval
test.levene(y ~ group, data = dat)

# Example 2: Levene's test based on the arithmetic mean with 95% confidence interval
test.levene(y ~ group, data = dat, method = "mean")

# Example 3: Levene's test based on the median with 99% confidence interval
test.levene(y ~ group, data = dat, conf.level = 0.99)

## Not run:
# Example 4: Write results into a text file
test.levene(y ~ group, data = dat, write = "Levene.txt")

# Example 5: Levene's test based on the median with 95% confidence interval,
# plot results
test.levene(y ~ group, data = dat, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Levene-test.png", dpi = 600, width = 5, height = 6)

# Levene's test based on the median with 95% confidence interval,
# extract plot
p <- test.levene(y ~ group, data = dat, output = FALSE)$plot
p

# Example 6: Extract data
plotdat <- test.levene(y ~ group, data = dat, output = FALSE)$data

# Draw violin and boxplots in line with the default setting of test.levene()
ggplot(plotdat, aes(group, y, fill = group)) +
  geom_violin(alpha = 0.3, trim = FALSE) +
  geom_boxplot(alpha = 0.2, width = 0.2) +
  geom_jitter(alpha = 0.2, width = 0.05, size = 1.25) +
  theme_bw() +
  guides(fill = "none")
## End(Not run)
This function performs one-sample, two-sample, and paired-sample t-tests and provides descriptive statistics, an effect size measure, and a plot showing error bars for (difference-adjusted) confidence intervals with jittered data points.
test.t(x, ...)

## Default S3 method:
test.t(x, y = NULL, mu = 0, paired = FALSE,
       alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
       hypo = TRUE, descript = TRUE, effsize = FALSE, weighted = FALSE,
       cor = TRUE, ref = NULL, correct = FALSE, plot = FALSE, point.size = 4,
       adjust = TRUE, error.width = 0.1, xlab = NULL, ylab = NULL, ylim = NULL,
       breaks = ggplot2::waiver(), line = TRUE, line.type = 3, line.size = 0.8,
       jitter = TRUE, jitter.size = 1.25, jitter.width = 0.05,
       jitter.height = 0, jitter.alpha = 0.1, title = "",
       subtitle = "Confidence Interval", digits = 2, p.digits = 4,
       as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE,
       ...)

## S3 method for class 'formula'
test.t(formula, data, alternative = c("two.sided", "less", "greater"),
       conf.level = 0.95, hypo = TRUE, descript = TRUE, effsize = FALSE,
       weighted = FALSE, cor = TRUE, ref = NULL, correct = FALSE,
       plot = FALSE, point.size = 4, adjust = TRUE, error.width = 0.1,
       xlab = NULL, ylab = NULL, ylim = NULL, breaks = ggplot2::waiver(),
       jitter = TRUE, jitter.size = 1.25, jitter.width = 0.05,
       jitter.height = 0, jitter.alpha = 0.1, title = "",
       subtitle = "Confidence Interval", digits = 2, p.digits = 4,
       as.na = NULL, write = NULL, append = TRUE, check = TRUE, output = TRUE,
       ...)
x |
a numeric vector of data values. |
... |
further arguments to be passed to or from methods. |
y |
a numeric vector of data values. |
mu |
a numeric value indicating the population mean under the
null hypothesis. Note that the argument |
paired |
logical: if |
alternative |
a character string specifying the alternative hypothesis,
must be one of |
hypo |
logical: if |
descript |
logical: if |
effsize |
logical: if |
weighted |
logical: if |
cor |
logical: if |
ref |
character string |
correct |
logical: if |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
plot |
logical: if |
point.size |
a numeric value indicating the |
adjust |
logical: if |
error.width |
a numeric value indicating the horizontal bar width of the error bar. |
xlab |
a character string specifying the labels for the x-axis. |
ylab |
a character string specifying the labels for the y-axis. |
ylim |
a numeric vector of length two specifying the limits of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
line |
logical: if |
line.type |
an integer value or character string specifying the line type for the line representing the population mean under the null hypothesis, i.e., 0 = blank, 1 = solid, 2 = dashed, 3 = dotted, 4 = dotdash, 5 = longdash, 6 = twodash. |
line.size |
a numeric value indicating the |
jitter |
logical: if |
jitter.size |
a numeric value indicating the |
jitter.width |
a numeric value indicating the amount of horizontal jitter. |
jitter.height |
a numeric value indicating the amount of vertical jitter. |
jitter.alpha |
a numeric value indicating the opacity of the jittered data points. |
title |
a character string specifying the text for the title for the plot. |
subtitle |
a character string specifying the text for the subtitle for the plot. |
digits |
an integer value indicating the number of decimal places to be used for displaying descriptive statistics and confidence interval. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values,
i.e. these values are converted to |
write |
a character string naming a text file with file extension
|
append |
logical: if |
check |
logical: if |
output |
logical: if |
formula |
in case of two sample t-test (i.e., |
data |
a matrix or data frame containing the variables in the
formula |
When an effect size measure is requested (i.e., effsize = TRUE), Cohen's d
based on the non-weighted standard deviation (i.e., weighted = FALSE), which
does not assume homogeneity of variance, is computed by default (see Delacre
et al., 2021). Cohen's d based on the pooled standard deviation, assuming
equality of variances between groups, can be requested by specifying
weighted = TRUE.
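A minimal sketch of this default effect size, assuming two illustrative group vectors:

# Cohen's d based on the non-weighted standard deviation (weighted = FALSE)
x1 <- c(3, 1, 4, 2, 5, 3)
x2 <- c(2, 3, 6, 6, 3)
(mean(x1) - mean(x2)) / sqrt((var(x1) + var(x2)) / 2)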
Returns an object of class misty.object
, which is a list with the following
entries:
call |
function call |
type |
type of analysis |
sample |
type of sample, i.e., one-, two-, or paired sample |
formula |
formula of the current analysis |
data |
data frame specified in |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
Delacre, M., Lakens, D., Ley, C., Liu, L., & Leys, C. (2021). Why Hedges' g*s based on the non-pooled standard deviation should be reported with Welch's t-test. https://doi.org/10.31234/osf.io/tu6mp
aov.b
, aov.w
, test.welch
, test.z
,
test.levene
, cohens.d
, ci.mean.diff
,
ci.mean
dat1 <- data.frame(group = c(1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2),
                   x = c(3, 1, 4, 2, 5, 3, 2, 3, 6, 6, 3, NA))

#-------------------------------------------------------------------------------
# One-Sample Design

# Example 1a: Two-sided one-sample t-test
# population mean = 3
test.t(dat1$x, mu = 3)

# Example 1b: One-sided one-sample t-test
# population mean = 3
test.t(dat1$x, mu = 3, alternative = "greater")

# Example 1c: Two-sided one-sample t-test
# population mean = 3, convert value 3 to NA
test.t(dat1$x, mu = 3, as.na = 3)

# Example 1d: Two-sided one-sample t-test
# population mean = 3, print Cohen's d
test.t(dat1$x, mu = 3, effsize = TRUE)

# Example 1e: Two-sided one-sample t-test
# population mean = 3, print Cohen's d with small sample correction factor
test.t(dat1$x, mu = 3, effsize = TRUE, correct = TRUE)

# Example 1f: Two-sided one-sample t-test
# population mean = 3,
# do not print hypotheses and descriptive statistics
test.t(dat1$x, mu = 3, hypo = FALSE, descript = FALSE)

# Example 1g: Two-sided one-sample t-test
# print descriptive statistics with 3 digits and p-value with 5 digits
test.t(dat1$x, mu = 3, digits = 3, p.digits = 5)

## Not run:
# Example 1h: Two-sided one-sample t-test
# population mean = 3, plot results
test.t(dat1$x, mu = 3, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("One-sample_t-test.png", dpi = 600, width = 3, height = 6)

# Example 1i: Two-sided one-sample t-test
# population mean = 3, extract plot
p <- test.t(dat1$x, mu = 3, output = FALSE)$plot
p

# Extract data
plotdat <- data.frame(x = test.t(dat1$x, mu = 3, output = FALSE)$data[[1]])

# Draw plot in line with the default setting of test.t()
ggplot(plotdat, aes(0, x)) +
  geom_point(stat = "summary", fun = "mean", size = 4) +
  stat_summary(fun.data = "mean_cl_normal", geom = "errorbar", width = 0.20) +
  scale_x_continuous(name = NULL, limits = c(-2, 2)) +
  scale_y_continuous(name = NULL) +
  geom_hline(yintercept = 3, linetype = 3, linewidth = 0.8) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5),
        axis.text.x = element_blank(),
        axis.ticks.x = element_blank())
## End(Not run)

#-------------------------------------------------------------------------------
# Two-Sample Design

# Example 2a: Two-sided two-sample t-test
test.t(x ~ group, data = dat1)

# Example 2b: One-sided two-sample t-test
test.t(x ~ group, data = dat1, alternative = "greater")

# Example 2c: Two-sided two-sample t-test
# print Cohen's d with weighted pooled SD
test.t(x ~ group, data = dat1, effsize = TRUE)

# Example 2d: Two-sided two-sample t-test
# print Cohen's d with unweighted pooled SD
test.t(x ~ group, data = dat1, effsize = TRUE, weighted = FALSE)

# Example 2e: Two-sided two-sample t-test
# print Cohen's d with weighted pooled SD and
# small sample correction factor
test.t(x ~ group, data = dat1, effsize = TRUE, correct = TRUE)

# Example 2f: Two-sided two-sample t-test
# print Cohen's d with SD of the reference group 1
test.t(x ~ group, data = dat1, effsize = TRUE, ref = 1)

# Example 2g: Two-sided two-sample t-test
# do not print hypotheses and descriptive statistics
test.t(x ~ group, data = dat1, descript = FALSE, hypo = FALSE)

# Example 2h: Two-sided two-sample t-test
# print descriptive statistics with 3 digits and p-value with 5 digits
test.t(x ~ group, data = dat1, digits = 3, p.digits = 5)

## Not run:
# Example 2i: Two-sided two-sample t-test
# plot results
test.t(x ~ group, data = dat1, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Two-sample_t-test.png", dpi = 600, width = 4, height = 6)

# Example 2j: Two-sided two-sample t-test
# extract plot
p <- test.t(x ~ group, data = dat1, output = FALSE)$plot
p

# Extract data used to plot results
plotdat <- test.t(x ~ group, data = dat1, output = FALSE)$data

# Draw plot in line with the default setting of test.t()
ggplot(plotdat, aes(factor(group), x)) +
  geom_point(stat = "summary", fun = "mean", size = 4) +
  stat_summary(fun.data = "mean_cl_normal", geom = "errorbar", width = 0.20) +
  scale_x_discrete(name = NULL) +
  scale_y_continuous(name = "y") +
  labs(title = "", subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5))
## End(Not run)

#-----------------
group1 <- c(3, 1, 4, 2, 5, 3, 6, 7)
group2 <- c(5, 2, 4, 3, 1)

# Example 2k: Two-sided two-sample t-test
test.t(group1, group2)

#-------------------------------------------------------------------------------
# Paired-Sample Design

dat2 <- data.frame(pre = c(1, 3, 2, 5, 7),
                   post = c(2, 2, 1, 6, 8))

# Example 3a: Two-sided paired-sample t-test
test.t(dat2$pre, dat2$post, paired = TRUE)

# Example 3b: One-sided paired-sample t-test
test.t(dat2$pre, dat2$post, paired = TRUE, alternative = "greater")

# Example 3c: Two-sided paired-sample t-test
# convert value 1 to NA
test.t(dat2$pre, dat2$post, as.na = 1, paired = TRUE)

# Example 3d: Two-sided paired-sample t-test
# print Cohen's d based on the standard deviation of the difference scores
test.t(dat2$pre, dat2$post, paired = TRUE, effsize = TRUE)

# Example 3e: Two-sided paired-sample t-test
# print Cohen's d based on the standard deviation of the difference scores
# with small sample correction factor
test.t(dat2$pre, dat2$post, paired = TRUE, effsize = TRUE, correct = TRUE)

# Example 3f: Two-sided paired-sample t-test
# print Cohen's d controlling for the correlation between measures
test.t(dat2$pre, dat2$post, paired = TRUE, effsize = TRUE, weighted = FALSE)

# Example 3g: Two-sided paired-sample t-test
# print Cohen's d controlling for the correlation between measures
# with small sample correction factor
test.t(dat2$pre, dat2$post, paired = TRUE, effsize = TRUE, weighted = FALSE,
       correct = TRUE)

# Example 3h: Two-sided paired-sample t-test
# print Cohen's d ignoring the correlation between measures
test.t(dat2$pre, dat2$post, paired = TRUE, effsize = TRUE, weighted = FALSE,
       cor = FALSE)

# Example 3i: Two-sided paired-sample t-test
# do not print hypotheses and descriptive statistics
test.t(dat2$pre, dat2$post, paired = TRUE, hypo = FALSE, descript = FALSE)

# Example 3j: Two-sided paired-sample t-test
# print descriptive statistics with 3 digits and p-value with 5 digits
test.t(dat2$pre, dat2$post, paired = TRUE, digits = 3, p.digits = 5)

## Not run:
# Example 3k: Two-sided paired-sample t-test
# plot results
test.t(dat2$pre, dat2$post, paired = TRUE, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Paired-sample_t-test.png", dpi = 600, width = 3, height = 6)

# Example 3l: Two-sided paired-sample t-test
# extract plot
p <- test.t(dat2$pre, dat2$post, paired = TRUE, output = FALSE)$plot
p

# Extract data used to plot results
plotdat <- data.frame(test.t(dat2$pre, dat2$post, paired = TRUE,
                             output = FALSE)$data)

# Difference score
plotdat$diff <- plotdat$y - plotdat$x

# Draw plot in line with the default setting of test.t()
ggplot(plotdat, aes(0, diff)) +
  geom_point(stat = "summary", fun = "mean", size = 4) +
  stat_summary(fun.data = "mean_cl_normal", geom = "errorbar", width = 0.20) +
  scale_x_discrete(name = NULL) +
  scale_y_continuous(name = NULL) +
  geom_hline(yintercept = 0, linetype = 3, linewidth = 0.8) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5),
        axis.text.x = element_blank(),
        axis.ticks.x = element_blank())
## End(Not run)
This function performs Welch's two-sample t-test and Welch's ANOVA including Games-Howell post hoc test for multiple comparison and provides descriptive statistics, effect size measures, and a plot showing error bars for difference-adjusted confidence intervals with jittered data points.
test.welch(formula, data, alternative = c("two.sided", "less", "greater"),
           posthoc = FALSE, conf.level = 0.95, hypo = TRUE, descript = TRUE,
           effsize = FALSE, weighted = FALSE, ref = NULL, correct = FALSE,
           plot = FALSE, point.size = 4, adjust = TRUE, error.width = 0.1,
           xlab = NULL, ylab = NULL, ylim = NULL, breaks = ggplot2::waiver(),
           jitter = TRUE, jitter.size = 1.25, jitter.width = 0.05,
           jitter.height = 0, jitter.alpha = 0.1, title = "",
           subtitle = "Confidence Interval", digits = 2, p.digits = 4,
           as.na = NULL, write = NULL, append = TRUE, check = TRUE,
           output = TRUE, ...)
formula |
a formula of the form y ~ group where y is a numeric variable giving the data values and group a numeric variable, character variable or factor with two or more values or factor levels giving the corresponding groups. |
data |
a matrix or data frame containing the variables in the formula formula. |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
posthoc |
logical: if TRUE, Games-Howell post hoc test for multiple comparison is conducted when computing Welch's ANOVA. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
hypo |
logical: if TRUE (default), null and alternative hypothesis are shown on the console. |
descript |
logical: if TRUE (default), descriptive statistics are shown on the console. |
effsize |
logical: if TRUE, effect size measures are shown on the console, i.e., Cohen's d in the two-sample design and eta-squared and omega-squared in the multiple-sample design. |
weighted |
logical: if TRUE, the weighted pooled standard deviation is used to compute Cohen's d. |
ref |
a numeric value or character string indicating the reference group. The standard deviation of the reference group is used to standardize the mean difference to compute Cohen's d. |
correct |
logical: if TRUE, correction factor to remove positive bias of Cohen's d in small samples is used. |
plot |
logical: if TRUE, a plot showing error bars for confidence intervals is drawn. |
point.size |
a numeric value indicating the size aesthetic for the point representing the mean value. |
adjust |
logical: if TRUE (default), difference-adjustment for the confidence intervals is applied. |
error.width |
a numeric value indicating the horizontal bar width of the error bar. |
xlab |
a character string specifying the labels for the x-axis. |
ylab |
a character string specifying the labels for the y-axis. |
ylim |
a numeric vector of length two specifying the limits of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
jitter |
logical: if TRUE (default), jittered data points are drawn. |
jitter.size |
a numeric value indicating the size aesthetic for the jittered data points. |
jitter.width |
a numeric value indicating the amount of horizontal jitter. |
jitter.height |
a numeric value indicating the amount of vertical jitter. |
jitter.alpha |
a numeric value indicating the opacity of the jittered data points. |
title |
a character string specifying the text for the title for the plot. |
subtitle |
a character string specifying the text for the subtitle for the plot. |
digits |
an integer value indicating the number of decimal places to be used for displaying descriptive statistics and confidence interval. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values, i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file with extension .txt specified in write, if FALSE existing text file will be overwritten. |
check |
logical: if TRUE (default), argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
... |
further arguments to be passed to or from methods. |
By default, Cohen's d based on the non-weighted standard deviation (i.e., weighted = FALSE), which does not assume homogeneity of variance, is computed when requesting an effect size measure (i.e., effsize = TRUE; see Delacre et al., 2021). Cohen's d based on the pooled standard deviation assuming equality of variances between groups can be requested by specifying weighted = TRUE.
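To make the distinction concrete, here is a minimal sketch of the two variants of Cohen's d for two independent groups (the group vectors are made-up values; the formulas follow the description above and Delacre et al., 2021):

# Two groups with possibly unequal variances (made-up values)
g1 <- c(3, 1, 4, 2, 5, 3)
g2 <- c(2, 3, 6, 6, 3)

m1 <- mean(g1); m2 <- mean(g2)
s1 <- sd(g1);   s2 <- sd(g2)
n1 <- length(g1); n2 <- length(g2)

# weighted = FALSE (default): non-weighted standard deviation,
# i.e., square root of the mean of the two group variances
d.unweighted <- (m1 - m2) / sqrt((s1^2 + s2^2) / 2)

# weighted = TRUE: pooled standard deviation assuming equal variances
sd.pooled <- sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
d.weighted <- (m1 - m2) / sd.pooled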
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
sample |
type of sample, i.e., two- or multiple sample |
formula |
formula of the current analysis |
data |
data frame specified in data |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
Delacre, M., Lakens, D., Ley, C., Liu, L., & Leys, C. (2021). Why Hedges' g*s based on the non-pooled standard deviation should be reported with Welch's t-test. https://doi.org/10.31234/osf.io/tu6mp
test.t, test.z, test.levene, aov.b, cohens.d, ci.mean.diff, ci.mean
dat1 <- data.frame(group1 = c(1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2),
                   group2 = c(1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3),
                   y = c(3, 1, 4, 2, 5, 3, 2, 3, 6, 6, 3, NA))

#-------------------------------------------------------------------------------
# Two-Sample Design

# Example 1a: Two-sided two-sample Welch-test
test.welch(y ~ group1, data = dat1)

# Example 1b: One-sided two-sample Welch-test
test.welch(y ~ group1, data = dat1, alternative = "greater")

# Example 1c: Two-sided two-sample Welch-test
# print Cohen's d with weighted pooled SD
test.welch(y ~ group1, data = dat1, effsize = TRUE)

# Example 1d: Two-sided two-sample Welch-test
# print Cohen's d with unweighted pooled SD
test.welch(y ~ group1, data = dat1, effsize = TRUE, weighted = FALSE)

# Example 1e: Two-sided two-sample Welch-test
# print Cohen's d with weighted pooled SD and
# small sample correction factor
test.welch(y ~ group1, data = dat1, effsize = TRUE, correct = TRUE)

# Example 1f: Two-sided two-sample Welch-test
# print Cohen's d with SD of the reference group 1
test.welch(y ~ group1, data = dat1, effsize = TRUE, ref = 1)

# Example 1g: Two-sided two-sample Welch-test
# print Cohen's d with weighted pooled SD and
# small sample correction factor
test.welch(y ~ group1, data = dat1, effsize = TRUE, correct = TRUE)

# Example 1h: Two-sided two-sample Welch-test
# do not print hypotheses and descriptive statistics
test.welch(y ~ group1, data = dat1, descript = FALSE, hypo = FALSE)

# Example 1i: Two-sided two-sample Welch-test
# print descriptive statistics with 3 digits and p-value with 5 digits
test.welch(y ~ group1, data = dat1, digits = 3, p.digits = 5)

## Not run:
# Example 1j: Two-sided two-sample Welch-test
# plot results
test.welch(y ~ group1, data = dat1, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Two-sample_Welch-test.png", dpi = 600, width = 4, height = 6)

# Example 1k: Two-sided two-sample Welch-test
# extract plot
p <- test.welch(y ~ group1, data = dat1, output = FALSE)$plot
p

# Extract data
plotdat <- test.welch(y ~ group1, data = dat1, output = FALSE)$data

# Draw plot in line with the default setting of test.welch()
ggplot(plotdat, aes(factor(group), y)) +
  geom_point(stat = "summary", fun = "mean", size = 4) +
  stat_summary(fun.data = "mean_cl_normal", geom = "errorbar", width = 0.20) +
  scale_x_discrete(name = NULL) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5))
## End(Not run)

#-------------------------------------------------------------------------------
# Multiple-Sample Design

# Example 2a: Welch's ANOVA
test.welch(y ~ group2, data = dat1)

# Example 2b: Welch's ANOVA
# print eta-squared and omega-squared
test.welch(y ~ group2, data = dat1, effsize = TRUE)

# Example 2c: Welch's ANOVA
# do not print hypotheses and descriptive statistics
test.welch(y ~ group2, data = dat1, descript = FALSE, hypo = FALSE)

## Not run:
# Example 2d: Welch's ANOVA
# plot results
test.welch(y ~ group2, data = dat1, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Multiple-sample_Welch-test.png", dpi = 600, width = 4.5, height = 6)

# Example 2e: Welch's ANOVA
# extract plot
p <- test.welch(y ~ group2, data = dat1, output = FALSE)$plot
p

# Extract data
plotdat <- test.welch(y ~ group2, data = dat1, output = FALSE)$data

# Draw plot in line with the default setting of test.welch()
ggplot(plotdat, aes(group, y)) +
  geom_point(stat = "summary", fun = "mean", size = 4) +
  stat_summary(fun.data = "mean_cl_normal", geom = "errorbar", width = 0.20) +
  scale_x_discrete(name = NULL) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5))
## End(Not run)
This function performs one-sample, two-sample, and paired-sample z-tests and provides descriptive statistics, effect size measure, and a plot showing error bars for (difference-adjusted) confidence intervals with jittered data points.
test.z(x, ...)

## Default S3 method:
test.z(x, y = NULL, sigma = NULL, sigma2 = NULL, mu = 0, paired = FALSE,
       alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
       hypo = TRUE, descript = TRUE, effsize = FALSE, plot = FALSE,
       point.size = 4, adjust = TRUE, error.width = 0.1, xlab = NULL,
       ylab = NULL, ylim = NULL, breaks = ggplot2::waiver(), line = TRUE,
       line.type = 3, line.size = 0.8, jitter = TRUE, jitter.size = 1.25,
       jitter.width = 0.05, jitter.height = 0, jitter.alpha = 0.1,
       title = "", subtitle = "Confidence Interval", digits = 2,
       p.digits = 4, as.na = NULL, write = NULL, append = TRUE,
       check = TRUE, output = TRUE, ...)

## S3 method for class 'formula'
test.z(formula, data, sigma = NULL, sigma2 = NULL,
       alternative = c("two.sided", "less", "greater"), conf.level = 0.95,
       hypo = TRUE, descript = TRUE, effsize = FALSE, plot = FALSE,
       point.size = 4, adjust = TRUE, error.width = 0.1, xlab = NULL,
       ylab = NULL, ylim = NULL, breaks = ggplot2::waiver(), jitter = TRUE,
       jitter.size = 1.25, jitter.width = 0.05, jitter.height = 0,
       jitter.alpha = 0.1, title = "", subtitle = "Confidence Interval",
       digits = 2, p.digits = 4, as.na = NULL, write = NULL, append = TRUE,
       check = TRUE, output = TRUE, ...)
x |
a numeric vector of data values. |
... |
further arguments to be passed to or from methods. |
y |
a numeric vector of data values. |
sigma |
a numeric vector indicating the population standard deviation(s). In case of two-sample z-test, equal standard deviations are assumed when specifying one value for the argument sigma, while unequal standard deviations are assumed when specifying two values. Note that either the argument sigma or the argument sigma2 is specified. |
sigma2 |
a numeric vector indicating the population variance(s). In case of two-sample z-test, equal variances are assumed when specifying one value for the argument sigma2, while unequal variances are assumed when specifying two values. Note that either the argument sigma or the argument sigma2 is specified. |
mu |
a numeric value indicating the population mean under the null hypothesis. Note that the argument mu is only used when computing a one-sample z-test. |
paired |
logical: if TRUE, paired-sample z-test is computed. |
alternative |
a character string specifying the alternative hypothesis, must be one of "two.sided" (default), "greater" or "less". |
hypo |
logical: if TRUE (default), null and alternative hypothesis are shown on the console. |
descript |
logical: if TRUE (default), descriptive statistics are shown on the console. |
effsize |
logical: if TRUE, effect size measure Cohen's d is shown on the console. |
conf.level |
a numeric value between 0 and 1 indicating the confidence level of the interval. |
plot |
logical: if TRUE, a plot showing error bars for confidence intervals is drawn. |
point.size |
a numeric value indicating the size aesthetic for the point representing the mean value. |
adjust |
logical: if TRUE (default), difference-adjustment for the confidence intervals is applied. |
error.width |
a numeric value indicating the horizontal bar width of the error bar. |
xlab |
a character string specifying the labels for the x-axis. |
ylab |
a character string specifying the labels for the y-axis. |
ylim |
a numeric vector of length two specifying the limits of the y-axis. |
breaks |
a numeric vector specifying the points at which tick-marks are drawn at the y-axis. |
line |
logical: if TRUE (default), a horizontal line representing the population mean under the null hypothesis is drawn. |
line.type |
an integer value or character string specifying the line type for the line representing the population mean under the null hypothesis, i.e., 0 = blank, 1 = solid, 2 = dashed, 3 = dotted, 4 = dotdash, 5 = longdash, 6 = twodash. |
line.size |
a numeric value indicating the linewidth aesthetic for the line representing the population mean under the null hypothesis. |
jitter |
logical: if TRUE (default), jittered data points are drawn. |
jitter.size |
a numeric value indicating the size aesthetic for the jittered data points. |
jitter.width |
a numeric value indicating the amount of horizontal jitter. |
jitter.height |
a numeric value indicating the amount of vertical jitter. |
jitter.alpha |
a numeric value indicating the opacity of the jittered data points. |
title |
a character string specifying the text for the title for the plot. |
subtitle |
a character string specifying the text for the subtitle for the plot. |
digits |
an integer value indicating the number of decimal places to be used for displaying descriptive statistics and confidence interval. |
p.digits |
an integer value indicating the number of decimal places to be used for displaying the p-value. |
as.na |
a numeric vector indicating user-defined missing values, i.e. these values are converted to NA before conducting the analysis. |
write |
a character string naming a text file with file extension ".txt" (e.g., "Output.txt") for writing the output into a text file. |
append |
logical: if TRUE (default), output will be appended to an existing text file with extension .txt specified in write, if FALSE existing text file will be overwritten. |
check |
logical: if TRUE (default), argument specification is checked. |
output |
logical: if TRUE (default), output is shown on the console. |
formula |
in case of two sample z-test (i.e., paired = FALSE), a formula of the form y ~ group where y is a numeric variable giving the data values and group a numeric variable, character variable or factor with two values or factor levels giving the corresponding groups. |
data |
a matrix or data frame containing the variables in the formula formula. |
Cohen's d reported when argument effsize = TRUE is based on the population standard deviation specified in sigma or the square root of the population variance specified in sigma2. In a one-sample and paired-sample design, Cohen's d is the mean of the difference scores divided by the population standard deviation of the difference scores (i.e., equivalent to Cohen's d_z according to Lakens, 2013). In a two-sample design, Cohen's d is the difference between the means of the two groups divided by either the population standard deviation when assuming and specifying equal standard deviations, or the unweighted pooled population standard deviation when assuming and specifying unequal standard deviations.
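As an illustration, a minimal sketch of how these two effect sizes are computed (the numeric values are made up; sigma stands in for the known population standard deviations):

# Paired-sample design: Cohen's d (d_z) is the mean difference score
# divided by the population SD of the difference scores
pre  <- c(1, 3, 2, 5, 7)
post <- c(2, 2, 1, 6, 8)
sigma.diff <- 1.2
d.paired <- mean(post - pre) / sigma.diff

# Two-sample design with unequal population SDs, e.g., sigma = c(1.2, 1.5):
# Cohen's d is the mean difference divided by the unweighted pooled
# population SD, i.e., the square root of the mean of the two variances
sig1 <- 1.2
sig2 <- 1.5
m1 <- 3; m2 <- 4   # group means (made-up values)
d.two <- (m1 - m2) / sqrt((sig1^2 + sig2^2) / 2)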
Returns an object of class misty.object, which is a list with the following entries:
call |
function call |
type |
type of analysis |
sample |
type of sample, i.e., one-, two-, or paired sample |
formula |
formula of the current analysis |
data |
data frame specified in data |
plot |
ggplot2 object for plotting the results |
args |
specification of function arguments |
result |
result table |
Takuya Yanagida [email protected]
Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4, 1-12. https://doi.org/10.3389/fpsyg.2013.00863
Rasch, D., Kubinger, K. D., & Yanagida, T. (2011). Statistics in psychology - Using R and SPSS. John Wiley & Sons.
test.t, aov.b, aov.w, test.welch, cohens.d, ci.mean.diff, ci.mean
dat1 <- data.frame(group = c(1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2),
                   x = c(3, 1, 4, 2, 5, 3, 2, 3, 6, 4, 3, NA))

#-------------------------------------------------------------------------------
# One-Sample Design

# Example 1a: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
test.z(dat1$x, sigma = 1.2, mu = 3)

# Example 1b: Two-sided one-sample z-test
# population mean = 3, population variance = 1.44
test.z(dat1$x, sigma2 = 1.44, mu = 3)

# Example 1c: One-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
test.z(dat1$x, sigma = 1.2, mu = 3, alternative = "greater")

# Example 1d: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
# convert value 3 to NA
test.z(dat1$x, sigma = 1.2, mu = 3, as.na = 3)

# Example 1e: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
# print Cohen's d
test.z(dat1$x, sigma = 1.2, mu = 3, effsize = TRUE)

# Example 1f: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
# do not print hypotheses and descriptive statistics
test.z(dat1$x, sigma = 1.2, mu = 3, hypo = FALSE, descript = FALSE)

# Example 1g: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
# print descriptive statistics with 3 digits and p-value with 5 digits
test.z(dat1$x, sigma = 1.2, mu = 3, digits = 3, p.digits = 5)

## Not run:
# Example 1h: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
# plot results
test.z(dat1$x, sigma = 1.2, mu = 3, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("One-sample_z-test.png", dpi = 600, width = 3, height = 6)

# Example 1i: Two-sided one-sample z-test
# population mean = 3, population standard deviation = 1.2
# extract plot
p <- test.z(dat1$x, sigma = 1.2, mu = 3, output = FALSE)$plot
p

# Extract data
plotdat <- data.frame(test.z(dat1$x, sigma = 1.2, mu = 3,
                             output = FALSE)$data[[1]])

# Extract results
result <- test.z(dat1$x, sigma = 1.2, mu = 3, output = FALSE)$result

# Draw plot in line with the default setting of test.z()
ggplot(plotdat, aes(0, x)) +
  geom_point(data = result, aes(x = 0L, m), size = 4) +
  geom_errorbar(data = result, aes(x = 0L, y = m, ymin = m.low, ymax = m.upp),
                width = 0.2) +
  scale_x_continuous(name = NULL, limits = c(-2, 2)) +
  scale_y_continuous(name = NULL) +
  geom_hline(yintercept = 3, linetype = 3, linewidth = 0.8) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5),
        axis.text.x = element_blank(),
        axis.ticks.x = element_blank())
## End(Not run)

#-------------------------------------------------------------------------------
# Two-Sample Design

# Example 2a: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
test.z(x ~ group, sigma = 1.2, data = dat1)

# Example 2b: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2 and 1.5, unequal SD assumption
test.z(x ~ group, sigma = c(1.2, 1.5), data = dat1)

# Example 2c: Two-sided two-sample z-test
# population variance (Var) = 1.44 and 2.25, unequal Var assumption
test.z(x ~ group, sigma2 = c(1.44, 2.25), data = dat1)

# Example 2d: One-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
test.z(x ~ group, sigma = 1.2, data = dat1, alternative = "greater")

# Example 2e: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
# print Cohen's d
test.z(x ~ group, sigma = 1.2, data = dat1, effsize = TRUE)

# Example 2f: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
# do not print hypotheses and descriptive statistics,
# print Cohen's d
test.z(x ~ group, sigma = 1.2, data = dat1, descript = FALSE, hypo = FALSE)

# Example 2g: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
# print descriptive statistics with 3 digits and p-value with 5 digits
test.z(x ~ group, sigma = 1.2, data = dat1, digits = 3, p.digits = 5)

## Not run:
# Example 2h: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
# plot results
test.z(x ~ group, sigma = 1.2, data = dat1, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Two-sample_z-test.png", dpi = 600, width = 4, height = 6)

# Example 2i: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
# extract plot
p <- test.z(x ~ group, sigma = 1.2, data = dat1, output = FALSE)$plot
p
## End(Not run)

#-----------------
group1 <- c(3, 1, 4, 2, 5, 3, 6, 7)
group2 <- c(5, 2, 4, 3, 1)

# Example 2j: Two-sided two-sample z-test
# population standard deviation (SD) = 1.2, equal SD assumption
test.z(group1, group2, sigma = 1.2)

#-------------------------------------------------------------------------------
# Paired-Sample Design

dat2 <- data.frame(pre = c(1, 3, 2, 5, 7),
                   post = c(2, 2, 1, 6, 8))

# Example 3a: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE)

# Example 3b: Two-sided paired-sample z-test
# population variance of difference score = 1.44
test.z(dat2$pre, dat2$post, sigma2 = 1.44, paired = TRUE)

# Example 3c: One-sided paired-sample z-test
# population standard deviation of difference score = 1.2
test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE,
       alternative = "greater")

# Example 3d: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
# convert value 1 to NA
test.z(dat2$pre, dat2$post, sigma = 1.2, as.na = 1, paired = TRUE)

# Example 3e: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
# print Cohen's d
test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE, effsize = TRUE)

# Example 3f: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
# do not print hypotheses and descriptive statistics
test.z(dat2$pre, dat2$post, sigma = 1.2, mu = 3, paired = TRUE,
       hypo = FALSE, descript = FALSE)

# Example 3g: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
# print descriptive statistics with 3 digits and p-value with 5 digits
test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE,
       digits = 3, p.digits = 5)

## Not run:
# Example 3h: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
# plot results
test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE, plot = TRUE)

# Load ggplot2 package
library(ggplot2)

# Save plot, ggsave() from the ggplot2 package
ggsave("Paired-sample_z-test.png", dpi = 600, width = 3, height = 6)

# Example 3i: Two-sided paired-sample z-test
# population standard deviation of difference score = 1.2
# extract plot
p <- test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE,
            output = FALSE)$plot
p

# Extract data
plotdat <- data.frame(test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE,
                             output = FALSE)$data)

# Difference score
plotdat$diff <- plotdat$y - plotdat$x

# Extract results
result <- test.z(dat2$pre, dat2$post, sigma = 1.2, paired = TRUE,
                 output = FALSE)$result

# Draw plot in line with the default setting of test.z()
ggplot(plotdat, aes(0, diff)) +
  geom_point(data = result, aes(x = 0, m.diff), size = 4) +
  geom_errorbar(data = result,
                aes(x = 0L, y = m.diff, ymin = m.low, ymax = m.upp),
                width = 0.2) +
  scale_x_continuous(name = NULL, limits = c(-2, 2)) +
  scale_y_continuous(name = "y") +
  geom_hline(yintercept = 0, linetype = 3, linewidth = 0.8) +
  labs(subtitle = "Two-Sided 95% Confidence Interval") +
  theme_bw() +
  theme(plot.subtitle = element_text(hjust = 0.5),
        axis.text.x = element_blank(),
        axis.ticks.x = element_blank())
## End(Not run)
This function writes a data frame or matrix into a Stata data file.
write.dta(x, file = "Stata_Data.dta", version = 14, label = NULL, str.thres = 2045, adjust.tz = TRUE, check = TRUE)
x |
a matrix or data frame to be written in Stata, vectors are coerced to a data frame. |
file |
a character string naming a file with or without file extension '.dta', e.g., "Stata_Data.dta" or "Stata_Data". |
version |
Stata file version to use. Supports versions 8-15. |
label |
dataset label to use, or NULL. Defaults to the value stored in the "label" attribute of the data frame. Must be less than 80 characters. |
str.thres |
any character vector with a maximum length greater than str.thres bytes will be stored as a long string (strL) instead of a standard string (str) variable. |
adjust.tz |
this argument controls how the timezone of date-time values is treated when writing, see 'Details' of the write_dta() function in the haven package. |
check |
logical: if TRUE (default), argument specification is checked. |
This function is a modified copy of the write_dta() function in the haven package by Hadley Wickham, Evan Miller and Danny Smith (2023).
Hadley Wickham, Evan Miller and Danny Smith
Wickham H, Miller E, Smith D (2023). haven: Import and Export 'SPSS', 'Stata' and 'SAS' Files. R package version 2.5.3. https://CRAN.R-project.org/package=haven
read.dta, write.sav, write.mplus, write.xlsx
## Not run:
# Example 1: Write data frame 'mtcars' into the Stata data file 'mtcars.dta'
write.dta(mtcars, "mtcars.dta")
## End(Not run)
This function writes a matrix or data frame to a tab-delimited file without variable names, a Mplus input template, and a text file with variable names. Note that only numeric variables are allowed, i.e., non-numeric variables will be removed from the data set. Missing data will be coded as a single numeric value.
write.mplus(x, file = "Mplus_Data.dat", data = TRUE, input = TRUE, var = FALSE, na = -99, check = TRUE)
x |
a matrix or data frame to be written to a tab-delimited file. |
file |
a character string naming a file with or without the file extension '.dat', e.g., "Mplus_Data.dat" or "Mplus_Data". |
data |
logical: if TRUE (default), Mplus data file is written in a text file named according to the argument file. |
input |
logical: if TRUE (default), Mplus input template is written in a text file named according to the argument file. |
var |
logical: if TRUE, variable names are written in a text file named according to the argument file. |
na |
a numeric value or character string representing missing values (NA) in the data set. |
check |
logical: if TRUE (default), argument specification is checked. |
Returns a character string indicating the variable names for the Mplus input file.
Takuya Yanagida [email protected]
Muthen, L. K., & Muthen, B. O. (1998-2017). Mplus User's Guide (8th ed.). Muthen & Muthen.
read.mplus, mplus.run, write.sav, write.xlsx, write.dta
## Not run:
# Example 1: Write Mplus Data File and a Mplus input template
write.mplus(mtcars)

# Example 2: Write Mplus Data File "mtcars.dat" and a Mplus input template
# "mtcars_INPUT.inp", missing values coded with -999,
# write variable names in a text file called "mtcars_VARNAMES.inp"
write.mplus(mtcars, file = "mtcars.dat", var = TRUE, na = -999)
## End(Not run)
This function writes the results of a misty object (misty.object) into an Excel file.
write.result(x, file = "Results.xlsx", tri = x$args$tri,
             digits = x$args$digits, p.digits = x$args$p.digits,
             icc.digits = x$args$icc.digits, r.digits = x$args$r.digits,
             ess.digits = x$args$ess.digits, mcse.digits = x$args$mcse.digits,
             check = TRUE)
x |
misty object (misty.object) resulting from a misty function supported by write.result (see 'Details'). |
file |
a character string naming a file with or without file extension '.xlsx', e.g., "Results.xlsx" or "Results". |
tri |
a character string or character vector indicating which triangular of the matrix to show on the console, i.e., both for the upper and lower triangular, lower (default) for the lower triangular, and upper for the upper triangular. |
digits |
an integer value indicating the number of decimal places to be used for displaying results. |
p.digits |
an integer indicating the number of decimal places to be used for displaying p-values. |
icc.digits |
an integer indicating the number of decimal places to be used for displaying intraclass correlation coefficients. |
r.digits |
an integer value indicating the number of decimal places to be used for displaying R-hat values. |
ess.digits |
an integer value indicating the number of decimal places to be used for displaying effective sample sizes. |
mcse.digits |
an integer value indicating the number of decimal places to be used for displaying Monte Carlo standard errors. |
check |
logical: if TRUE (default), argument specification is checked. |
Currently the function supports result objects from the functions blimp.bayes, cor.matrix, crosstab, descript, dominance.manual, dominance, effsize, freq, item.alpha, item.cfa, item.invar, item.omega, result.lca, multilevel.cfa, multilevel.cor, multilevel.descript, mplus.bayes, multilevel.fit, multilevel.invar, multilevel.omega, na.auxiliary, na.coverage, na.descript, na.pattern, robust.coef, and std.coef.
Takuya Yanagida [email protected]
blimp.bayes, cor.matrix, crosstab, descript, dominance.manual, dominance, effsize, freq, item.alpha, item.cfa, item.invar, item.omega, result.lca, mplus.bayes, multilevel.cfa, multilevel.cor, multilevel.descript, multilevel.fit, multilevel.invar, multilevel.omega, na.auxiliary, na.coverage, na.descript, na.pattern, robust.coef, std.coef
## Not run:
#----------------------------------------------------------------------------
# Example 1: item.cfa() function

# Load data set "HolzingerSwineford1939" in the lavaan package
data("HolzingerSwineford1939", package = "lavaan")

result <- item.cfa(HolzingerSwineford1939[, c("x1", "x2", "x3")],
                   output = FALSE)
write.result(result, "CFA.xlsx")

#----------------------------------------------------------------------------
# Example 2: multilevel.descript() function

# Load data set "Demo.twolevel" in the lavaan package
data("Demo.twolevel", package = "lavaan")

result <- multilevel.descript(y1:y3, data = Demo.twolevel,
                              cluster = "cluster", output = FALSE)
write.result(result, "Multilevel_Descript.xlsx")
## End(Not run)
This function writes a data frame or matrix into an SPSS file by either using the write_sav() function in the haven package by Hadley Wickham and Evan Miller (2019) or the free software PSPP.
write.sav(x, file = "SPSS_Data.sav", var.attr = NULL, pspp.path = NULL,
          digits = 2, write.csv = FALSE, sep = c(";", ","), na = "",
          write.sps = FALSE, check = TRUE)
x |
a matrix or data frame to be written in SPSS, vectors are coerced to a data frame. |
file |
a character string naming a file with or without file extension '.sav', e.g., "SPSS_Data.sav" or "SPSS_Data". |
var.attr |
a matrix or data frame with variable attributes used in the SPSS file, only 'variable labels' (column name label), 'value labels' (column name values), and 'user-missing values' (column name missing) are supported (see 'Details'). |
pspp.path |
a character string indicating the path where the PSPP folder is located on the computer, e.g. "C:/Program Files/PSPP/". |
digits |
an integer value indicating the number of decimal places shown in the SPSS file for non-integer variables. |
write.csv |
logical: if TRUE, CSV file is written along with the SPSS file. |
sep |
a character string for specifying the CSV file, either ";" for a semicolon-separated file (default) or "," for a comma-separated file. |
na |
a character string for specifying missing values in the CSV file. |
write.sps |
logical: if TRUE, SPSS syntax is written in a text file along with the CSV file. |
check |
logical: if TRUE (default), argument specification is checked. |
If the argument pspp.path is not specified (i.e., pspp.path = NULL), the write_sav() function in the haven package is used. Otherwise the object x is written as a CSV file, which is subsequently imported into SPSS using the free software PSPP by executing SPSS syntax written in R. Note that PSPP needs to be installed on your computer when using the pspp.path argument.
An SPSS file with 'variable labels', 'value labels', and 'user-missing values' is written by specifying the var.attr argument. Note that the number of rows in the matrix or data frame specified in var.attr needs to match the number of columns in the data frame or matrix specified in x, i.e., each row in var.attr represents the variable attributes of the corresponding variable in x. In addition, the column names of the matrix or data frame specified in var.attr need to be labeled as label for 'variable labels', values for 'value labels', and missing for 'user-missing values'.

Labels for the values are defined in the column values of the matrix or data frame in var.attr using the equal-sign (e.g., 0 = female) and are separated by a semicolon (e.g., 0 = female; 1 = male).

User-missing values are defined in the column missing of the matrix or data frame in var.attr, either specifying one user-missing value (e.g., -99) or more than one but up to three user-missing values separated by a semicolon (e.g., -77; -99).
Part of the function using PSPP was adapted from the write.pspp()
function in the miceadds package by Alexander Robitzsch, Simon Grund and
Thorsten Henke (2019).
Takuya Yanagida [email protected]
GNU Project (2018). GNU PSPP for GNU/Linux (Version 1.2.0). Boston, MA: Free Software Foundation. https://www.gnu.org/software/pspp/
Wickham H., & Miller, E. (2019). haven: Import and Export 'SPSS', 'Stata' and 'SAS' Files. R package version 2.2.0.
Robitzsch, A., Grund, S., & Henke, T. (2019). miceadds: Some additional multiple imputation functions, especially for mice. R package version 3.4-17.
read.sav, write.xlsx, write.dta, write.mplus
## Not run:
dat <- data.frame(id = 1:5,
                  gender = c(NA, 0, 1, 1, 0),
                  age = c(16, 19, 17, NA, 16),
                  status = c(1, 2, 3, 1, 4),
                  score = c(511, 506, 497, 502, 491))

# Example 1: Write SPSS file using the haven package
write.sav(dat, file = "Dataframe_haven.sav")

# Example 2: Write SPSS file using PSPP,
# write CSV file and SPSS syntax along with the SPSS file
write.sav(dat, file = "Dataframe_PSPP.sav", pspp.path = "C:/Program Files/PSPP",
          write.csv = TRUE, write.sps = TRUE)

# Example 3: Specify variable attributes
# Note that it is recommended to manually specify the variable attributes in a
# CSV or Excel file which is subsequently read into R
attr <- data.frame(# Variable names
                   var = c("id", "gender", "age", "status", "score"),
                   # Variable labels
                   label = c("Identification number", "Gender", "Age in years",
                             "Migration background", "Achievement test score"),
                   # Value labels
                   values = c("", "0 = female; 1 = male", "",
                              "1 = Austria; 2 = former Yugoslavia; 3 = Turkey; 4 = other",
                              ""),
                   # User-missing values
                   missing = c("", "-99", "-99", "-99", "-99"),
                   stringsAsFactors = FALSE)

# Example 4: Write SPSS file with variable attributes using the haven package
write.sav(dat, file = "Dataframe_haven_Attr.sav", var.attr = attr)

# Example 5: Write SPSS file with variable attributes using PSPP
write.sav(dat, file = "Dataframe_PSPP_Attr.sav", var.attr = attr,
          pspp.path = "C:/Program Files/PSPP")
## End(Not run)
This function calls the write_xlsx()
function in the writexl package
by Jeroen Ooms to write an Excel file (.xlsx).
write.xlsx(x, file = "Excel_Data.xlsx", col.names = TRUE, format = FALSE, use.zip64 = FALSE, check = TRUE)
x |
a matrix, data frame or (named) list of matrices or data frames that will be written in the Excel file. |
file |
a character string naming a file with or without file extension '.xlsx', e.g., "Excel_Data.xlsx" or "Excel_Data". |
col.names |
logical: if TRUE (default), column names are written in the first row of the Excel file. |
format |
logical: if TRUE, column names in the Excel file are centered and bold. |
use.zip64 |
logical: if TRUE, zip64 is used to enable support for Excel files larger than 4GB; note that not all platforms can read such files. |
check |
logical: if TRUE (default), argument specification is checked. |
This function supports strings, numbers, booleans, and dates.
The function was adapted from the write_xlsx()
function in the writexl
package by Jeroen Ooms (2021).
Jeroen Ooms
Ooms, J. (2021). writexl: Export Data Frames to Excel 'xlsx' Format. R package version 1.4.0. https://CRAN.R-project.org/package=writexl
read.xlsx, write.sav, write.dta, write.mplus
## Not run:
# Example 1: Write Excel file (.xlsx)
dat <- data.frame(id = 1:5,
                  gender = c(NA, 0, 1, 1, 0),
                  age = c(16, 19, 17, NA, 16),
                  status = c(1, 2, 3, 1, 4),
                  score = c(511, 506, 497, 502, 491))

write.xlsx(dat, file = "Excel.xlsx")

# Example 2: Write Excel file with multiple sheets (.xlsx)
write.xlsx(list(cars = cars, mtcars = mtcars), file = "Excel_Sheets.xlsx")
## End(Not run)