Package 'AUC'

Title: Threshold Independent Performance Measures for Probabilistic Classifiers
Description: Various functions to compute the area under the curve of selected measures: The area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC), the area under the accuracy curve (AUACC), and the area under the receiver operating characteristic curve (AUROC). Support for visualization and partial areas is included.
Authors: Michel Ballings and Dirk Van den Poel
Maintainer: Michel Ballings <[email protected]>
License: GPL (>= 2)
Version: 0.3.2
Built: 2024-11-01 11:25:13 UTC
Source: CRAN

Help Index


Threshold independent performance measures for probabilistic classifiers.

Description

Summary and plotting functions for threshold independent performance measures for probabilistic classifiers.

Details

This package includes functions to compute the area under the curve (function auc) of selected measures: The area under the sensitivity curve (AUSEC) (function sensitivity), the area under the specificity curve (AUSPC) (function specificity), the area under the accuracy curve (AUACC) (function accuracy), and the area under the receiver operating characteristic curve (AUROC) (function roc). The curves can also be visualized using the function plot. Support for partial areas is provided.

Auxiliary code in this package is adapted from the ROCR package. Except for the AUROC, the measures available in this package are not available in the ROCR package, and vice versa. For the AUROC, we adapted the ROCR code to increase computational speed, so that it can be used more effectively in objective functions. As a result, less functionality is offered (e.g., averaging of cross-validation runs); please use the ROCR package for those purposes.

Author(s)

Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

auc(sensitivity(churn$predictions, churn$labels))
auc(specificity(churn$predictions, churn$labels))
auc(accuracy(churn$predictions, churn$labels))
auc(roc(churn$predictions, churn$labels))

plot(sensitivity(churn$predictions, churn$labels))
plot(specificity(churn$predictions, churn$labels))
plot(accuracy(churn$predictions, churn$labels))
plot(roc(churn$predictions, churn$labels))
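
The curve objects can also be computed once and reused by both auc and plot; a minimal workflow sketch:

r <- roc(churn$predictions, churn$labels)  # compute the ROC curve once
auc(r)   # area under the curve
plot(r)  # visualize the same curve object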

Compute the accuracy curve.

Description

This function computes the accuracy curve required for the auc function and the plot function.

Usage

accuracy(predictions, labels, perc.rank = TRUE)

Arguments

predictions

A numeric vector of classification probabilities (confidences, scores) of the positive event.

labels

A factor of observed class labels (responses), with 0 and 1 as the only allowed values.

perc.rank

A logical. If TRUE (default) the percentile rank of the predictions is used.

Value

A list containing the following elements:

cutoffs

A numeric vector of threshold values

measure

A numeric vector of accuracy values corresponding to the threshold values

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

accuracy(churn$predictions, churn$labels)
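
The returned object can be inspected directly; a minimal sketch, using the element names from the Value section above:

acc <- accuracy(churn$predictions, churn$labels)
head(acc$cutoffs)  # threshold values
head(acc$measure)  # accuracy at each threshold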

Compute the area under the curve of a given performance measure.

Description

This function computes the area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC), the area under the accuracy curve (AUACC), or the area under the receiver operating characteristic curve (AUROC).

Usage

auc(x, min = 0, max = 1)

Arguments

x

An object produced by one of the functions sensitivity, specificity, accuracy, or roc

min

A numeric value between 0 and 1, denoting the cutoff that defines the start of the area under the curve

max

A numeric value between 0 and 1, denoting the cutoff that defines the end of the area under the curve

Value

A numeric value between zero and one denoting the area under the curve

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

auc(sensitivity(churn$predictions, churn$labels))

auc(specificity(churn$predictions, churn$labels))

auc(accuracy(churn$predictions, churn$labels))

auc(roc(churn$predictions, churn$labels))
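
Partial areas are obtained through the min and max arguments; a sketch in which the 0.2 to 0.8 cutoff window is an arbitrary illustration:

# area under the sensitivity curve restricted to cutoffs in [0.2, 0.8]
auc(sensitivity(churn$predictions, churn$labels), min = 0.2, max = 0.8)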

Display the NEWS file

Description

AUCNews shows the NEWS file of the AUC package.

Usage

AUCNews()

Value

None.


Churn data

Description

churn contains three variables: the churn predictions (probabilities) of two models, and the observed churn labels.

Usage

data(churn)

Format

A data frame with 1302 observations and 3 variables: predictions, predictions2, and labels.

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

Examples

data(churn)
str(churn)
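
Since the data frame holds the scores of two models, a natural use is to compare their AUROCs; a sketch, assuming predictions2 holds the second model's scores as described under Format:

auc(roc(churn$predictions, churn$labels))   # model 1
auc(roc(churn$predictions2, churn$labels))  # model 2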

Plot the sensitivity, specificity, accuracy and roc curves.

Description

This function plots the (partial) sensitivity, specificity, accuracy and roc curves.

Usage

## S3 method for class 'AUC'
plot(x, y = NULL, ..., type = "l", add = FALSE, min = 0, max = 1)

Arguments

x

An object produced by one of the functions sensitivity, specificity, accuracy, or roc

y

Not used.

...

Arguments to be passed to methods, such as graphical parameters. See ?plot

type

Type of plot. Default is line plot.

add

Logical. If TRUE the curve is added to an existing plot. If FALSE a new plot is created.

min

A numeric value between 0 and 1, denoting the cutoff that defines the start of the (partial) curve to be plotted

max

A numeric value between 0 and 1, denoting the cutoff that defines the end of the (partial) curve to be plotted

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

plot(sensitivity(churn$predictions, churn$labels))

plot(specificity(churn$predictions, churn$labels))

plot(accuracy(churn$predictions, churn$labels))

plot(roc(churn$predictions, churn$labels))
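
Curves can be overlaid by setting add = TRUE; a sketch in which lty is an ordinary graphical parameter passed through the ... argument:

plot(roc(churn$predictions, churn$labels))                    # model 1
plot(roc(churn$predictions2, churn$labels), add = TRUE, lty = 2)  # model 2, dashed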

Compute the receiver operating characteristic (ROC) curve.

Description

This function computes the receiver operating characteristic (ROC) curve required for the auc function and the plot function.

Usage

roc(predictions, labels)

Arguments

predictions

A numeric vector of classification probabilities (confidences, scores) of the positive event.

labels

A factor of observed class labels (responses), with 0 and 1 as the only allowed values.

Value

A list containing the following elements:

cutoffs

A numeric vector of threshold values

fpr

A numeric vector of false positive rates corresponding to the threshold values

tpr

A numeric vector of true positive rates corresponding to the threshold values

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

roc(churn$predictions, churn$labels)
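
The fpr and tpr vectors can also be used directly, for example to recompute the AUROC with the trapezoidal rule; a sketch meant only to illustrate the Value structure (auc is the supported way to obtain this number):

r <- roc(churn$predictions, churn$labels)
# trapezoidal rule over the (fpr, tpr) points
sum(diff(r$fpr) * (head(r$tpr, -1) + tail(r$tpr, -1)) / 2)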

Compute the sensitivity curve.

Description

This function computes the sensitivity curve required for the auc function and the plot function.

Usage

sensitivity(predictions, labels, perc.rank = TRUE)

Arguments

predictions

A numeric vector of classification probabilities (confidences, scores) of the positive event.

labels

A factor of observed class labels (responses), with 0 and 1 as the only allowed values.

perc.rank

A logical. If TRUE (default) the percentile rank of the predictions is used.

Value

A list containing the following elements:

cutoffs

A numeric vector of threshold values

measure

A numeric vector of sensitivity values corresponding to the threshold values

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

sensitivity(churn$predictions, churn$labels)
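
With perc.rank = FALSE the cutoffs are the raw prediction values rather than their percentile ranks; a sketch contrasting the two settings:

head(sensitivity(churn$predictions, churn$labels)$cutoffs)                     # percentile ranks
head(sensitivity(churn$predictions, churn$labels, perc.rank = FALSE)$cutoffs)  # raw scores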

Compute the specificity curve.

Description

This function computes the specificity curve required for the auc function and the plot function.

Usage

specificity(predictions, labels, perc.rank = TRUE)

Arguments

predictions

A numeric vector of classification probabilities (confidences, scores) of the positive event.

labels

A factor of observed class labels (responses), with 0 and 1 as the only allowed values.

perc.rank

A logical. If TRUE (default) the percentile rank of the predictions is used.

Value

A list containing the following elements:

cutoffs

A numeric vector of threshold values

measure

A numeric vector of specificity values corresponding to the threshold values

Author(s)

Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]

References

Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.

See Also

sensitivity, specificity, accuracy, roc, auc, plot

Examples

data(churn)

specificity(churn$predictions, churn$labels)
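
Because sensitivity and specificity computed from the same inputs share the same cutoff grid, a cutoff balancing the two can be read off directly; a sketch, assuming equal-length curves from identical inputs:

se <- sensitivity(churn$predictions, churn$labels)
sp <- specificity(churn$predictions, churn$labels)
# cutoff where sensitivity and specificity are closest to each other
se$cutoffs[which.min(abs(se$measure - sp$measure))]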