Package: irrCAC 1.0

Kilem L. Gwet

irrCAC: Computing Chance-Corrected Agreement Coefficients (CAC)

Calculates various chance-corrected agreement coefficients (CAC) among 2 or more raters. Among the coefficients covered are Cohen's kappa, Conger's kappa, Fleiss' kappa, the Brennan-Prediger coefficient, Gwet's AC1/AC2 coefficients, and Krippendorff's alpha. Multiple sets of weights are provided for computing weighted analyses. All of these statistical procedures are described in detail in Gwet, K.L. (2014, ISBN:978-0970806284): "Handbook of Inter-Rater Reliability," 4th edition, Advanced Analytics, LLC.

Author: Kilem L. Gwet, Ph.D.

irrCAC_1.0.tar.gz
irrCAC_1.0.tar.gz (r-4.5-noble) | irrCAC_1.0.tar.gz (r-4.4-noble)
irrCAC_1.0.tgz (r-4.4-emscripten) | irrCAC_1.0.tgz (r-4.3-emscripten)
irrCAC.pdf | irrCAC.html
irrCAC/json (API)

# Install 'irrCAC' in R:
install.packages('irrCAC', repos = c('https://cran.r-universe.dev', 'https://cloud.r-project.org'))
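
# A quick check that the installation works, using the bundled cac.raw4raters dataset and the gwet.ac1.raw() function (both listed further down this page). The exact structure of the returned object is an assumption here, not documented on this page:

```r
# Load the package and one of its bundled example datasets
library(irrCAC)
data(cac.raw4raters)   # raw ratings: 12 subjects rated by 4 raters

# Unweighted Gwet AC1 computed from raw ratings
ac1 <- gwet.ac1.raw(cac.raw4raters)

# The result is assumed to carry an $est component with the coefficient
# estimate, its standard error, and a confidence interval
ac1$est
```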

Datasets:
  • altman - Dataset describing Altman's Benchmarking Scale
  • cac.ben.gerry - Ratings of 12 units from 2 raters named Ben and Gerry
  • cac.dist.g1g2 - Distribution of 4 raters by subject and by category, for 14 Subjects that belong to 2 groups "G1" and "G2"
  • cac.dist4cat - Distribution of 4 raters by category and subject - subjects allocated to 2 groups A and B.
  • cac.raw.g1g2 - Dataset of raw ratings from 4 Raters on 14 Subjects that belong to 2 groups named "G1" and "G2"
  • cac.raw.gender - Rating data from 4 raters and 15 human subjects, 9 of whom are female and 6 male.
  • cac.raw4raters - Rating Data from 4 Raters and 12 Subjects.
  • cac.raw5obser - Scores assigned by 5 observers to 20 experimental units.
  • cont3x3abstractors - Distribution of 100 pregnant women by pregnancy type and by abstractor.
  • cont4x4diagnosis - Distribution of 223 psychiatric patients by type of psychiatric disorder and diagnosis method.
  • distrib.6raters - Distribution of 6 psychiatrists by Subject/patient and diagnosis Category.
  • fleiss - Dataset describing Fleiss' Benchmarking Scale
  • landis.koch - Dataset describing the Landis & Koch Benchmarking Scale
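
The dataset names hint at the two input formats the package's functions expect: "raw" datasets hold one row per subject and one column per rater, while "dist" datasets hold the distribution of raters by subject (rows) and category (columns). A minimal sketch of the two layouts, with toy values made up for illustration:

```r
# Raw format: one row per subject, one column per rater;
# cell values are the category labels the raters assigned
raw.ratings <- data.frame(
  rater1 = c("a", "b", "b"),
  rater2 = c("a", "b", "a"),
  rater3 = c("a", "b", "b")
)

# Distribution format: one row per subject, one column per category;
# each cell counts how many raters chose that category for that subject
dist.ratings <- data.frame(
  a = c(3, 0, 1),
  b = c(0, 3, 2)
)
```

Functions suffixed `.raw` (e.g. gwet.ac1.raw) take the first form; functions suffixed `.dist` (e.g. fleiss.kappa.dist) take the second.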

This package does not link to any GitHub/GitLab/R-Forge repository. No issue tracker or development information is available.

Score: 4.73 | 3 packages | 48 scripts | 521 downloads | 4 mentions | 28 exports | 0 dependencies

Last updated 5 years ago from: 41b40b431b. Checks: OK: 1, NOTE: 1. Indexed: yes.

Target          | Result | Date
Doc / Vignettes | OK     | Dec 10 2024
R-4.5-linux     | NOTE   | Dec 10 2024

Exports: altman.bf, bipolar.weights, bp.coeff.dist, bp.coeff.raw, bp2.table, circular.weights, conger.kappa.raw, fleiss.bf, fleiss.kappa.dist, fleiss.kappa.raw, gwet.ac1.dist, gwet.ac1.raw, gwet.ac1.table, identity.weights, kappa2.table, krippen.alpha.dist, krippen.alpha.raw, krippen2.table, landis.koch.bf, linear.weights, ordinal.weights, pa.coeff.dist, pa.coeff.raw, pa2.table, quadratic.weights, radical.weights, ratio.weights, scott2.table
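
The `*.weights` exports generate agreement weight matrices for weighted analyses. A sketch of how one might be combined with a coefficient function, assuming the `.raw` functions accept either a weight-name string or a weight matrix through a `weights` argument, and assuming cac.raw4raters uses five ordered categories:

```r
library(irrCAC)
data(cac.raw4raters)

# Build a quadratic weight matrix from ordered category scores 1..5
# (the category scores are an assumption about this dataset)
w <- quadratic.weights(1:5)

# Gwet's coefficient with non-identity weights (conventionally called AC2)
gwet.ac1.raw(cac.raw4raters, weights = w)

# Passing the weight name directly is assumed to be equivalent
gwet.ac1.raw(cac.raw4raters, weights = "quadratic")
```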

Dependencies: none

irrCAC-benchmarking

Rendered from benchmarking.Rmd using knitr::rmarkdown on Dec 10 2024.

Last update: 2019-09-23
Started: 2019-09-23
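
The benchmarking vignette covers interpreting an estimated coefficient against the Altman, Fleiss, and Landis-Koch scales. A sketch of the probabilistic benchmarking step, assuming altman.bf() takes the coefficient estimate followed by its standard error and returns interval membership probabilities (argument order and return shape are assumptions here):

```r
library(irrCAC)

# Suppose a coefficient of 0.67 was estimated with standard error 0.11
# (illustrative values, not taken from any bundled dataset)
altman.bf(0.67, 0.11)
```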

Calculating Chance-corrected Agreement Coefficients (CAC)

Rendered from overview.Rmd using knitr::rmarkdown on Dec 10 2024.

Last update: 2019-09-23
Started: 2019-09-23

Weighted Chance-corrected Agreement Coefficients

Rendered from weighting.Rmd using knitr::rmarkdown on Dec 10 2024.

Last update: 2019-09-23
Started: 2019-09-23

Readme and manuals

Help Manual

Help page | Topics
Dataset describing Altman's Benchmarking Scale | altman
Computing Altman's Benchmark Scale Membership Probabilities | altman.bf
Function for computing the Bipolar Weights | bipolar.weights
Brennan-Prediger's agreement coefficient among multiple raters (2, 3, +) when the input dataset is the distribution of raters by subject and category. | bp.coeff.dist
Brennan & Prediger's (BP) agreement coefficient for an arbitrary number of raters (2, 3, +) when the input data represent the raw ratings reported for each subject and each rater. | bp.coeff.raw
Brennan-Prediger coefficient for 2 raters | bp2.table
Ratings of 12 units from 2 raters named Ben and Gerry | cac.ben.gerry
Distribution of 4 raters by subject and by category, for 14 subjects that belong to 2 groups "G1" and "G2" | cac.dist.g1g2
Distribution of 4 raters by category and subject - subjects allocated to 2 groups A and B. | cac.dist4cat
Dataset of raw ratings from 4 raters on 14 subjects that belong to 2 groups named "G1" and "G2" | cac.raw.g1g2
Rating data from 4 raters and 15 human subjects, 9 of whom are female and 6 male. | cac.raw.gender
Rating data from 4 raters and 12 subjects. | cac.raw4raters
Scores assigned by 5 observers to 20 experimental units. | cac.raw5obser
Function for computing the Circular Weights | circular.weights
Conger's generalized kappa coefficient for an arbitrary number of raters (2, 3, +) when the input data represent the raw ratings reported for each subject and each rater. | conger.kappa.raw
Distribution of 100 pregnant women by pregnancy type and by abstractor. | cont3x3abstractors
Distribution of 223 psychiatric patients by type of psychiatric disorder and diagnosis method. | cont4x4diagnosis
Distribution of 6 psychiatrists by subject/patient and diagnosis category. | distrib.6raters
Dataset describing Fleiss' Benchmarking Scale | fleiss
Computing Fleiss Benchmark Scale Membership Probabilities | fleiss.bf
Fleiss' agreement coefficient among multiple raters (2, 3, +) when the input dataset is the distribution of raters by subject and category. | fleiss.kappa.dist
Fleiss' generalized kappa among multiple raters (2, 3, +) when the input data represent the raw ratings reported for each subject and each rater. | fleiss.kappa.raw
Gwet's AC1/AC2 agreement coefficient among multiple raters (2, 3, +) when the input dataset is the distribution of raters by subject and category. | gwet.ac1.dist
Gwet's AC1/AC2 agreement coefficient among multiple raters (2, 3, +) when the input data represent the raw ratings reported for each subject and each rater. | gwet.ac1.raw
Gwet's AC1/AC2 coefficient for 2 raters | gwet.ac1.table
Function for computing the Identity Weights | identity.weights
Kappa coefficient for 2 raters | kappa2.table
Krippendorff's agreement coefficient among multiple raters (2, 3, +) when the input dataset is the distribution of raters by subject and category. | krippen.alpha.dist
Krippendorff's alpha coefficient for an arbitrary number of raters (2, 3, +) when the input data represent the raw ratings reported for each subject and each rater. | krippen.alpha.raw
Krippendorff's alpha coefficient for 2 raters | krippen2.table
Dataset describing the Landis & Koch Benchmarking Scale | landis.koch
Computing Landis-Koch Benchmark Scale Membership Probabilities | landis.koch.bf
Function for computing the Linear Weights | linear.weights
Function for computing the Ordinal Weights | ordinal.weights
Percent agreement coefficient among multiple raters (2, 3, +) when the input dataset is the distribution of raters by subject and category. | pa.coeff.dist
Percent agreement among multiple raters (2, 3, +) when the input data represent the raw ratings reported for each subject and each rater. | pa.coeff.raw
Percent agreement coefficient for 2 raters | pa2.table
Function for computing the Quadratic Weights | quadratic.weights
Function for computing the Radical Weights | radical.weights
Function for computing the Ratio Weights | ratio.weights
Scott's coefficient for 2 raters | scott2.table
An R function for trimming leading and trailing blanks | trim