Title: | Sample Size Estimation Functions for Studies of Interobserver Agreement |
---|---|
Description: | Contains basic tools for sample size estimation in studies of interobserver/interrater agreement (reliability). Includes functions for both the power-based and confidence interval-based methods, with binary or multinomial outcomes and two through six raters. |
Authors: | Michael A Rotondi <[email protected]> |
Maintainer: | Michael A Rotondi <[email protected]> |
License: | GPL (>= 2) |
Version: | 1.2 |
Built: | 2024-11-15 06:49:43 UTC |
Source: | CRAN |
This function provides detailed sample size estimation information to determine the number of subjects required, using the confidence interval approach to sample size estimation for kappa. This version assumes that the outcome has three categories.
CI3Cats(kappa0, kappaL, kappaU=NA, props, raters=2, alpha=0.05)
kappa0 |
The anticipated (preliminary) value of kappa. |
kappaL |
The desired expected lower confidence limit for a two-sided 100(1 - alpha)% confidence interval for kappa. |
kappaU |
The desired expected upper confidence limit for kappa. |
props |
The anticipated prevalence of the desired trait. Note that the elements of the three element vector must be non-negative and sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function provides detailed sample size computations for studies of interobserver agreement with three outcome categories. It employs the confidence interval approach, determining the sample size that yields the specified expected confidence limits, so that the sample size is driven by the precision of the estimate rather than by a hypothesis test. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The specified expected lower limit. |
kappaU |
The specified expected upper limit. |
props |
The anticipated proportions of individuals with the outcomes of interest. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## estimate kappa0=0.4 with expected confidence limits of kappaL=0.3 and kappaU=0.6 in a
## study of interobserver agreement with 3 raters. Further suppose that the anticipated
## prevalences of the three categories are 0.30, 0.20 and 0.50.
## End(Not run)
CI3Cats(kappa0=0.4, kappaL=0.3, kappaU=0.6, props=c(0.30, 0.2, 0.5), raters=3, alpha=0.05)
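As a complement to the example above, the following minimal sketch compares the required sample size under several plausible planning values of kappa0. It assumes these functions are provided by the kappaSize package on CRAN (package name assumed, not stated in this documentation); each top-level call prints its own summary.
library(kappaSize)  # package name assumed
# Compare required N as the anticipated kappa0 increases, holding the expected
# limits at kappa0 - 0.1 and kappa0 + 0.2 and keeping the other planning values fixed.
CI3Cats(kappa0=0.4, kappaL=0.3, kappaU=0.6, props=c(0.30, 0.2, 0.5), raters=3, alpha=0.05)
CI3Cats(kappa0=0.6, kappaL=0.5, kappaU=0.8, props=c(0.30, 0.2, 0.5), raters=3, alpha=0.05)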
This function provides detailed sample size estimation information to determine the number of subjects required, using the confidence interval approach to sample size estimation for kappa. This version assumes that the outcome has four categories.
CI4Cats(kappa0, kappaL, kappaU=NA, props, raters=2, alpha=0.05)
kappa0 |
The preliminary (anticipated) value of kappa. |
kappaL |
The desired expected lower confidence limit for a two-sided 100(1 - alpha)% confidence interval for kappa. |
kappaU |
The desired expected upper confidence limit for kappa. |
props |
The anticipated prevalence of the desired trait. Note that the elements of the four element vector must be non-negative and sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function provides detailed sample size computations for studies of interobserver agreement with four outcome categories. It employs the confidence interval approach, determining the sample size that yields the specified expected confidence limits, so that the sample size is driven by the precision of the estimate rather than by a hypothesis test. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The specified expected lower limit. |
kappaU |
The specified expected upper limit. |
props |
The anticipated proportions of individuals with the outcomes of interest. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## estimate kappa0=0.4 with a precision of 0.1 on each side (kappaL=0.3, kappaU=0.5) in a
## study of interobserver agreement. Further suppose that the anticipated prevalences of
## the four categories are 0.30, 0.20, 0.20 and 0.30.
## End(Not run)
CI4Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.30, 0.2, 0.2, 0.3), alpha=0.05)
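A brief planning sketch (assuming, as before, that these functions come from the kappaSize package) can show how the required sample size responds to the number of available raters while the precision target stays fixed; each call prints its own summary.
library(kappaSize)  # package name assumed
# Same precision target (kappaL=0.3, kappaU=0.5 around kappa0=0.4), varying only raters.
CI4Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.30, 0.2, 0.2, 0.3), raters=2, alpha=0.05)
CI4Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.30, 0.2, 0.2, 0.3), raters=4, alpha=0.05)
CI4Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.30, 0.2, 0.2, 0.3), raters=6, alpha=0.05)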
This function provides detailed sample size estimation information to determine the number of subjects required, using the confidence interval approach to sample size estimation for kappa. This version assumes that the outcome has five categories.
CI5Cats(kappa0, kappaL, kappaU=NA, props, raters=2, alpha=0.05)
kappa0 |
The anticipated (preliminary) value of kappa. |
kappaL |
The desired expected lower confidence limit for a two-sided 100(1 - alpha)% confidence interval for kappa. |
kappaU |
The desired expected upper confidence limit for kappa. |
props |
The anticipated prevalence of the desired traits. Note that the elements of the five element vector must be non-negative and sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function provides detailed sample size computations for studies of interobserver agreement with five outcome categories. It employs the confidence interval approach, determining the sample size that yields the specified expected confidence limits, so that the sample size is driven by the precision of the estimate rather than by a hypothesis test. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The specified expected lower limit. |
kappaU |
The specified expected upper limit. |
props |
The anticipated proportions of individuals with the outcomes of interest. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## estimate kappa0=0.4 with a precision of 0.1 on each side (kappaL=0.3, kappaU=0.5) in a
## study of interobserver agreement. Further suppose that the anticipated prevalences of
## the five categories are 0.13, 0.17, 0.20, 0.20 and 0.30.
## End(Not run)
CI5Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.13, 0.17, 0.2, 0.2, 0.3), alpha=0.05)
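The sketch below (again assuming the kappaSize package name) contrasts two choices of the type I error rate. Holding the expected limits fixed, a smaller alpha should typically require a larger sample size, since the interval at a given N is wider; this is an illustrative assumption about the direction of the effect, not a statement of the exact output.
library(kappaSize)  # package name assumed
# Same planning values, alpha = 0.05 versus alpha = 0.01.
CI5Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.13, 0.17, 0.2, 0.2, 0.3), alpha=0.05)
CI5Cats(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.13, 0.17, 0.2, 0.2, 0.3), alpha=0.01)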
This function provides detailed sample size estimation information to determine the number of subjects required, using the confidence interval approach to sample size estimation for kappa. This version assumes that the outcome has two categories.
CIBinary(kappa0, kappaL, kappaU=NA, props, raters=2, alpha=0.05)
kappa0 |
The preliminary (anticipated) value of kappa. |
kappaL |
The desired expected lower confidence limit for a two-sided 100(1 - alpha)% confidence interval for kappa. |
kappaU |
The desired expected upper confidence limit for kappa. |
props |
The anticipated prevalence of the desired trait. Note that specifying props as either a single value, or two values that sum to one, provides the same result (see the sketch after the example below). |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function provides detailed sample size computations for studies of interobserver agreement with a binary outcome. It employs the confidence interval approach, determining the sample size that yields the specified expected confidence limits, so that the sample size is driven by the precision of the estimate rather than by a hypothesis test. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The specified expected lower limit. |
kappaU |
The specified expected upper limit. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## estimate kappa0=0.4 with a precision of 0.1 on each side (kappaL=0.3, kappaU=0.5) in a
## study of interobserver agreement. Further suppose that the prevalence of the trait of
## interest is 0.30.
## End(Not run)
CIBinary(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=0.30, alpha=0.05)
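To illustrate the note under props, a short sketch (package name kappaSize assumed): a single prevalence and the equivalent two-element vector describe the same binary trait and should return the same sample size.
library(kappaSize)  # package name assumed
# Both specifications describe a trait with prevalence 0.30; the reported N
# should be identical for the two calls.
CIBinary(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=0.30, alpha=0.05)
CIBinary(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=c(0.30, 0.70), alpha=0.05)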
This function provides the potential lower bound of a 100(1 - alpha)% confidence interval for kappa that can be calculated for a fixed sample size, n, and an anticipated value of kappa, kappa0. This version assumes that the outcome has three categories.
FixedN3Cats(kappa0, n, props, raters=2, alpha=0.05)
kappa0 |
The preliminary (anticipated) value of kappa. |
n |
The total number of available subjects. |
props |
The anticipated prevalence of the desired trait. Note that the elements of the three element vector must be non-negative and sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function calculates the expected lower bound of a one-sided confidence interval for kappa given a fixed sample size, n, and an anticipated value of kappa, kappa0. It illustrates the amount of precision available in the estimation of kappa for a fixed sample size. Note that a warning message is provided if any of the expected cell counts are less than 5.
n |
The specified sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The calculated expected lower limit. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the expected lower bound for
## kappa0=0.7, assuming that 80 subjects and 5 raters are available. Further suppose that
## the anticipated prevalences of the three categories are 0.33, 0.34 and 0.33.
## End(Not run)
FixedN3Cats(kappa0=0.7, n=80, props=c(0.33, 0.34, 0.33), alpha=0.05, raters=5)
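A small sketch (kappaSize package name assumed) can show how the expected lower limit behaves as the fixed number of available subjects grows; the reported kappaL should move closer to kappa0 with larger n.
library(kappaSize)  # package name assumed
# Expected lower limit for kappa0=0.7 with 5 raters at several fixed sample sizes.
FixedN3Cats(kappa0=0.7, n=40,  props=c(0.33, 0.34, 0.33), raters=5, alpha=0.05)
FixedN3Cats(kappa0=0.7, n=80,  props=c(0.33, 0.34, 0.33), raters=5, alpha=0.05)
FixedN3Cats(kappa0=0.7, n=160, props=c(0.33, 0.34, 0.33), raters=5, alpha=0.05)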
This function provides the potential lower bound of a 100(1 - alpha)% confidence interval for kappa that can be calculated for a fixed sample size, n, and an anticipated value of kappa, kappa0. This version assumes that the outcome of interest has four levels.
FixedN4Cats(kappa0, n, props, raters=2, alpha=0.05)
kappa0 |
The anticipated value of kappa. |
n |
The total number of available subjects. |
props |
The anticipated prevalence of the desired trait. Note that the elements of the four element vector must be non-negative and sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function calculates the expected lower bound of a one-sided confidence interval for kappa given a fixed sample size, n, and an anticipated value of kappa, kappa0. It illustrates the amount of precision available in the estimation of kappa for a fixed sample size. Note that a warning message is provided if any of the expected cell counts are less than 5.
n |
The specified sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The calculated expected lower limit. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the expected lower bound for
## kappa0=0.7, assuming that 80 subjects and 5 raters are available. Further suppose that
## the anticipated prevalences of the four categories are 0.40, 0.40, 0.10 and 0.10.
## End(Not run)
FixedN4Cats(kappa0=0.7, n=80, props=c(0.4, 0.4, 0.1, 0.1), alpha=0.05, raters=5)
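The sketch below (kappaSize package name assumed) keeps the sample size fixed and varies the number of raters, to show how much precision the extra raters buy for the same pool of subjects.
library(kappaSize)  # package name assumed
# Same 80 subjects, 2 raters versus 5 raters; compare the reported expected lower limits.
FixedN4Cats(kappa0=0.7, n=80, props=c(0.4, 0.4, 0.1, 0.1), raters=2, alpha=0.05)
FixedN4Cats(kappa0=0.7, n=80, props=c(0.4, 0.4, 0.1, 0.1), raters=5, alpha=0.05)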
This function provides the potential lower bound of a 100(1 - alpha)% confidence interval for kappa that can be calculated for a fixed sample size, n, and an anticipated value of kappa, kappa0. This version assumes that the outcome of interest has five levels.
FixedN5Cats(kappa0, n, props, raters=2, alpha=0.05)
kappa0 |
The anticipated (preliminary) value of kappa. |
n |
The total number of available subjects. |
props |
The anticipated prevalence of the desired traits. Note that the elements of the five element vector must be non-negative and sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function calculates the expected lower bound of a one-sided confidence interval for kappa given a fixed sample size, n, and an anticipated value of kappa, kappa0. It illustrates the amount of precision available in the estimation of kappa for a fixed sample size. Note that a warning message is provided if any of the expected cell counts are less than 5.
n |
The specified sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The calculated expected lower limit. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the expected lower bound for
## kappa0=0.6, assuming that 150 subjects and 2 raters are available. Further suppose that
## the anticipated prevalences of the five categories are 0.40, 0.20, 0.20, 0.10 and 0.10.
## End(Not run)
FixedN5Cats(kappa0=0.6, n=150, props=c(0.4, 0.2, 0.2, 0.1, 0.1), alpha=0.05, raters=2)
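As an illustrative sketch only (kappaSize package name assumed), the same fixed design can be examined at two significance levels. Since the reported bound comes from a one-sided interval, a larger alpha should be expected to give a less conservative, higher lower limit, though the exact values depend on the package's calculation.
library(kappaSize)  # package name assumed
# Same fixed design, alpha = 0.05 versus alpha = 0.10.
FixedN5Cats(kappa0=0.6, n=150, props=c(0.4, 0.2, 0.2, 0.1, 0.1), raters=2, alpha=0.05)
FixedN5Cats(kappa0=0.6, n=150, props=c(0.4, 0.2, 0.2, 0.1, 0.1), raters=2, alpha=0.10)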
This function provides the potential lower bound of a 100(1 - alpha)% confidence interval for kappa that can be calculated for a fixed sample size, n, and an anticipated value of kappa, kappa0. This version assumes that the outcome of interest is binary.
FixedNBinary(kappa0, n, props, raters=2, alpha=0.05)
kappa0 |
The preliminary (anticipated) value of kappa. |
n |
The total number of available subjects. |
props |
The anticipated prevalence of the desired trait. Note that specifying props as either a single value, or two values that sum to one, provides the same result. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
This function calculates the expected lower bound of a one-sided confidence interval for kappa given a fixed sample size, n, and an anticipated value of kappa, kappa0. It illustrates the amount of precision available in the estimation of kappa for a fixed sample size. Note that a warning message is provided if any of the expected cell counts are less than 5.
n |
The specified sample size. |
kappa0 |
The specified anticipated value of kappa. |
kappaL |
The calculated expected lower limit. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
ChiCrit |
The critical value used in the sample size calculation. It is typically not of direct interest and is not displayed in the summary output. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the expected lower bound for
## kappa0=0.7, assuming that 100 subjects and 4 raters are available. Further suppose that
## the prevalence of the trait is 0.50.
## End(Not run)
FixedNBinary(kappa0=0.7, n=100, props=0.50, alpha=0.05, raters=4)
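A further sketch (kappaSize package name assumed) contrasts a balanced trait with a rare one at the same fixed sample size; a more extreme prevalence is generally expected to yield less precision, that is, a lower expected kappaL, though the exact figures come from the package's calculation.
library(kappaSize)  # package name assumed
# Same fixed design, balanced (0.50) versus rare (0.10) trait prevalence.
FixedNBinary(kappa0=0.7, n=100, props=0.50, raters=4, alpha=0.05)
FixedNBinary(kappa0=0.7, n=100, props=0.10, raters=4, alpha=0.05)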
This function provides detailed sample size estimation information to determine the number of subjects required to test the hypothesis H0: kappa = kappa0 against the alternative HA: kappa = kappa1, at two-sided significance level alpha and with the specified power. This version assumes that the outcome is multinomial with three levels.
Power3Cats(kappa0, kappa1, props, raters=2, alpha=0.05, power=0.80)
kappa0 |
The value of kappa under the null hypothesis. |
kappa1 |
The value of kappa under the alternative hypothesis. |
props |
The anticipated prevalence of the desired traits. Note that this three element vector must sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
This function provides detailed sample size computations for studies of interobserver agreement with a three-level outcome. It employs the power-based approach, determining the sample size needed to reject the null hypothesis kappa = kappa0 in favour of the alternative kappa = kappa1 at a pre-specified significance level and power. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified null hypothesis. |
kappa1 |
The specified alternative hypothesis. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## test kappa0=0.4 vs. kappa1=0.6 with alpha=0.05 and power=0.80 in a study of
## interobserver agreement. Further suppose that the anticipated prevalences of the three
## categories are 0.30, 0.60 and 0.10.
## End(Not run)
Power3Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.60, 0.10), alpha=0.05, power=0.80)
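A minimal sketch (kappaSize package name assumed) compares the design at the default power of 0.80 against a stricter power of 0.90; demanding higher power for the same alternative should be expected to increase the required sample size.
library(kappaSize)  # package name assumed
# Same hypotheses and prevalences, power 0.80 versus 0.90.
Power3Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.60, 0.10), alpha=0.05, power=0.80)
Power3Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.60, 0.10), alpha=0.05, power=0.90)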
This function provides detailed sample size estimation information to determine the number of subjects required to test the hypothesis H0: kappa = kappa0 against the alternative HA: kappa = kappa1, at two-sided significance level alpha and with the specified power. This version assumes that the outcome is multinomial with four levels.
Power4Cats(kappa0, kappa1, props, raters=2, alpha=0.05, power=0.80)
kappa0 |
The value of kappa under the null hypothesis. |
kappa1 |
The value of kappa under the alternative hypothesis. |
props |
The anticipated prevalence of the desired traits. Note that this four element vector must sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
This function provides detailed sample size computations for studies of interobserver agreement with a four-level outcome. It employs the power-based approach, determining the sample size needed to reject the null hypothesis kappa = kappa0 in favour of the alternative kappa = kappa1 at a pre-specified significance level and power. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified null hypothesis. |
kappa1 |
The specified alternative hypothesis. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## test kappa0=0.4 vs. kappa1=0.6 with alpha=0.05 and power=0.80 in a study of
## interobserver agreement. Further suppose that the anticipated prevalences of the four
## categories are 0.30, 0.30, 0.30 and 0.10.
## End(Not run)
Power4Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.30, 0.30, 0.10), alpha=0.05, power=0.80)
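The sketch below (kappaSize package name assumed) varies the alternative hypothesis: a smaller separation between kappa0 and kappa1 should be expected to require a larger sample size to detect.
library(kappaSize)  # package name assumed
# Same null and prevalences, alternative kappa1 = 0.6 versus a closer kappa1 = 0.5.
Power4Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.30, 0.30, 0.10), alpha=0.05, power=0.80)
Power4Cats(kappa0=0.4, kappa1=0.5, props=c(0.30, 0.30, 0.30, 0.10), alpha=0.05, power=0.80)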
This function provides detailed sample size estimation information to determine the number of subjects required to test the hypothesis H0: kappa = kappa0 against the alternative HA: kappa = kappa1, at two-sided significance level alpha and with the specified power. This version assumes that the outcome is multinomial with five levels.
Power5Cats(kappa0, kappa1, props, raters=2, alpha=0.05, power=0.80)
kappa0 |
The value of kappa under the null hypothesis. |
kappa1 |
The value of kappa under the alternative hypothesis. |
props |
The anticipated prevalence of the desired traits. Note that this five element vector must sum to one. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
This function provides detailed sample size computations for studies of interobserver agreement with a five-level outcome. It employs the power-based approach, determining the sample size needed to reject the null hypothesis kappa = kappa0 in favour of the alternative kappa = kappa1 at a pre-specified significance level and power. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified null hypothesis. |
kappa1 |
The specified alternative hypothesis. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## test kappa0=0.4 vs. kappa1=0.6 with alpha=0.05 and power=0.80 in a study of
## interobserver agreement. Further suppose that the anticipated prevalences of the five
## categories are 0.30, 0.20, 0.10, 0.30 and 0.10.
## End(Not run)
Power5Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.20, 0.10, 0.30, 0.10), alpha=0.05, power=0.80)
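A brief sketch (kappaSize package name assumed) varies only the number of raters for the same hypotheses, to show how recruiting an additional rater trades against the number of subjects that must be enrolled.
library(kappaSize)  # package name assumed
# Same hypotheses and prevalences, 2 raters versus 3 raters.
Power5Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.20, 0.10, 0.30, 0.10), raters=2, alpha=0.05, power=0.80)
Power5Cats(kappa0=0.4, kappa1=0.6, props=c(0.30, 0.20, 0.10, 0.30, 0.10), raters=3, alpha=0.05, power=0.80)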
This function provides detailed sample size estimation information to determine the number of subjects required to test the hypothesis H0: kappa = kappa0 against the alternative HA: kappa = kappa1, at two-sided significance level alpha and with the specified power. This version assumes that the outcome is binary.
PowerBinary(kappa0, kappa1, props, raters=2, alpha=0.05, power=0.80)
kappa0 |
The value of kappa under the null hypothesis. |
kappa1 |
The value of kappa under the alternative hypothesis. |
props |
The anticipated prevalence of the desired trait. Note that specifying props as either a single value, or two values that sum to one, provides the same result. |
raters |
The number of raters that are available. This function allows between 2 and 6 raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
This function provides detailed sample size computations for studies of interobserver agreement with a binary outcome. It employs the power-based approach, determining the sample size needed to reject the null hypothesis kappa = kappa0 in favour of the alternative kappa = kappa1 at a pre-specified significance level and power. Note that a warning message is provided if any of the expected cell counts are less than 5.
N |
The calculated sample size. |
kappa0 |
The specified null hypothesis. |
kappa1 |
The specified alternative hypothesis. |
props |
The anticipated proportion of individuals with the outcome. |
raters |
The number of raters. |
alpha |
The desired type I error rate. |
power |
The desired level of power, recall power = 1 - type II error. |
Michael Rotondi, [email protected]
Rotondi MA, Donner A. (2012). A Confidence Interval Approach to Sample Size Estimation for Interobserver Agreement Studies with Multiple Raters and Outcomes. Journal of Clinical Epidemiology, 65:778-784.
Donner A, Rotondi MA. (2010). Sample Size Requirements for Interval Estimation of the Kappa Statistic for Interobserver Agreement Studies with a Binary Outcome and Multiple Raters. International Journal of Biostatistics 6:31.
Altaye M, Donner A, Klar N. (2001). Procedures for Assessing Interobserver Agreement among Multiple Raters. Biometrics 57:584-588.
Donner A. (1999). Sample Size Requirements for Interval Estimation of the Intraclass Kappa Statistic. Communication in Statistics 28:415-429.
Bartfay E, Donner A. (2001). Statistical Inferences for Interobserver Agreement Studies with Nominal Outcome Data. The Statistician 50:135-146.
Donner A, Eliasziw M. (1987) Sample size requirements for reliability studies. Statistics in Medicine 6:441-448.
## Not run: Suppose an investigator would like to determine the required sample size to
## test kappa0=0.4 vs. kappa1=0.6 with alpha=0.05 and power=0.80 in a study of
## interobserver agreement. Further suppose that the prevalence of the trait is 0.30.
## End(Not run)
PowerBinary(kappa0=0.4, kappa1=0.6, props=0.30, alpha=0.05, power=0.80)
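Finally, a planning sketch (kappaSize package name assumed) contrasts the two perspectives offered by this package for the same binary-outcome planning values. The power-based call answers "how many subjects to detect kappa1 against kappa0", while the confidence interval-based call answers "how many subjects to achieve a target precision around kappa0"; the two sample sizes address different questions and need not agree.
library(kappaSize)  # package name assumed
# Power-based design: detect kappa1 = 0.6 against kappa0 = 0.4.
PowerBinary(kappa0=0.4, kappa1=0.6, props=0.30, alpha=0.05, power=0.80)
# Confidence interval-based design: expected limits of 0.3 and 0.5 around kappa0 = 0.4.
CIBinary(kappa0=0.4, kappaL=0.3, kappaU=0.5, props=0.30, alpha=0.05)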