Title: | Threshold Independent Performance Measures for Probabilistic Classifiers |
---|---|
Description: | Various functions to compute the area under the curve of selected measures: The area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC), the area under the accuracy curve (AUACC), and the area under the receiver operating characteristic curve (AUROC). Support for visualization and partial areas is included. |
Authors: | Michel Ballings and Dirk Van den Poel |
Maintainer: | Michel Ballings <[email protected]> |
License: | GPL (>= 2) |
Version: | 0.3.2 |
Built: | 2024-11-01 11:25:13 UTC |
Source: | CRAN |
Summary and plotting functions for threshold independent performance measures for probabilistic classifiers.
This package includes functions to compute the area under the curve (function auc) of selected measures: the area under the sensitivity curve (AUSEC) (function sensitivity), the area under the specificity curve (AUSPC) (function specificity), the area under the accuracy curve (AUACC) (function accuracy), and the area under the receiver operating characteristic curve (AUROC) (function roc). The curves can also be visualized using the function plot. Support for partial areas is provided.
Auxiliary code in this package is adapted from the ROCR package. The measures available in this package are not available in the ROCR package, and vice versa (except for the AUROC). As for the AUROC, we adapted the ROCR code to increase computational speed (so it can be used more effectively in objective functions). As a result, less functionality is offered (e.g., averaging cross-validation runs). Please use the ROCR package for that purpose.
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
auc(sensitivity(churn$predictions,churn$labels))
auc(specificity(churn$predictions,churn$labels))
auc(accuracy(churn$predictions,churn$labels))
auc(roc(churn$predictions,churn$labels))
plot(sensitivity(churn$predictions,churn$labels))
plot(specificity(churn$predictions,churn$labels))
plot(accuracy(churn$predictions,churn$labels))
plot(roc(churn$predictions,churn$labels))
This function computes the accuracy curve required for the auc function and the plot function.
accuracy(predictions, labels, perc.rank = TRUE)
predictions |
A numeric vector of classification probabilities (confidences, scores) of the positive event. |
labels |
A factor of observed class labels (responses) with the only allowed values {0,1}. |
perc.rank |
A logical. If TRUE (default) the percentile rank of the predictions is used. |
A list containing the following elements:
cutoffs |
A numeric vector of threshold values |
measure |
A numeric vector of accuracy values corresponding to the threshold values |
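Because the value is a plain list, the cutoffs and measure elements can be combined to locate the threshold at which accuracy is maximal. A minimal sketch, assuming the AUC package and its churn dataset are installed (the variable names acc and best are ours, not part of the package):

```r
library(AUC)

data(churn)
acc <- accuracy(churn$predictions, churn$labels)

# Position of the largest accuracy value in the measure vector
best <- which.max(acc$measure)

acc$cutoffs[best]  # threshold at which accuracy is maximal
acc$measure[best]  # the accuracy achieved at that threshold
```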
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
accuracy(churn$predictions,churn$labels)
This function computes the area under the sensitivity curve (AUSEC), the area under the specificity curve (AUSPC), the area under the accuracy curve (AUACC), or the area under the receiver operating characteristic curve (AUROC).
auc(x, min = 0, max = 1)
x |
an object produced by one of the functions |
min |
a numeric value between 0 and 1, denoting the cutoff that defines the start of the area under the curve |
max |
a numeric value between 0 and 1, denoting the cutoff that defines the end of the area under the curve |
A numeric value between zero and one denoting the area under the curve
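The min and max arguments are what provide the partial areas mentioned in the package description: they restrict the computation to a cutoff interval, so a partial area can never exceed the full one. A minimal sketch, assuming the AUC package and its churn dataset are installed:

```r
library(AUC)

data(churn)
r <- roc(churn$predictions, churn$labels)

auc(r)                        # full AUROC, cutoffs in [0, 1]
auc(r, min = 0.1, max = 0.9)  # partial area, cutoffs restricted to [0.1, 0.9]
```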
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
auc(sensitivity(churn$predictions,churn$labels))
auc(specificity(churn$predictions,churn$labels))
auc(accuracy(churn$predictions,churn$labels))
auc(roc(churn$predictions,churn$labels))
AUCNews shows the NEWS file of the AUC package.
AUCNews()
None.
churn contains three variables: the churn predictions (probabilities) of two models, and observed churn.
data(churn)
A data frame with 1302 observations and 3 variables: predictions, predictions2, churn.
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
data(churn)
str(churn)
This function plots the (partial) sensitivity, specificity, accuracy, and ROC curves.
## S3 method for class 'AUC'
plot(x, y = NULL, ..., type = "l", add = FALSE, min = 0, max = 1)
x |
an object produced by one of the functions |
y |
Not used. |
... |
Arguments to be passed to methods, such as graphical parameters. See ?plot |
type |
Type of plot. Default is line plot. |
add |
Logical. If TRUE the curve is added to an existing plot. If FALSE a new plot is created. |
min |
a numeric value between 0 and 1, denoting the cutoff that defines the start of the area under the curve |
max |
a numeric value between 0 and 1, denoting the cutoff that defines the end of the area under the curve |
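The add argument makes it straightforward to overlay the curves of two models, e.g. the two prediction vectors in the churn dataset. A minimal sketch, assuming the AUC package and its churn dataset are installed; the col values are arbitrary graphical parameters passed through the ... argument:

```r
library(AUC)

data(churn)

# ROC curve of the first model on a fresh plot
plot(roc(churn$predictions, churn$labels), col = "black")

# ROC curve of the second model overlaid on the same axes
plot(roc(churn$predictions2, churn$labels), add = TRUE, col = "red")
```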
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
plot(sensitivity(churn$predictions,churn$labels))
plot(specificity(churn$predictions,churn$labels))
plot(accuracy(churn$predictions,churn$labels))
plot(roc(churn$predictions,churn$labels))
This function computes the receiver operating characteristic (ROC) curve required for the auc function and the plot function.
roc(predictions, labels)
predictions |
A numeric vector of classification probabilities (confidences, scores) of the positive event. |
labels |
A factor of observed class labels (responses) with the only allowed values {0,1}. |
A list containing the following elements:
cutoffs |
A numeric vector of threshold values |
fpr |
A numeric vector of false positive rates corresponding to the threshold values |
tpr |
A numeric vector of true positive rates corresponding to the threshold values |
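Because the value exposes cutoffs, fpr, and tpr directly, derived statistics are easy to compute by hand. As one illustration (ours, not part of the package), Youden's J statistic, tpr - fpr, identifies the cutoff where the ROC curve lies farthest above the diagonal. A sketch assuming the AUC package and its churn dataset are installed:

```r
library(AUC)

data(churn)
r <- roc(churn$predictions, churn$labels)

# Youden's J statistic at every cutoff
j <- r$tpr - r$fpr

r$cutoffs[which.max(j)]  # cutoff maximizing tpr - fpr
```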
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
roc(churn$predictions,churn$labels)
This function computes the sensitivity curve required for the auc function and the plot function.
sensitivity(predictions, labels, perc.rank = TRUE)
predictions |
A numeric vector of classification probabilities (confidences, scores) of the positive event. |
labels |
A factor of observed class labels (responses) with the only allowed values {0,1}. |
perc.rank |
A logical. If TRUE (default) the percentile rank of the predictions is used. |
A list containing the following elements:
cutoffs |
A numeric vector of threshold values |
measure |
A numeric vector of sensitivity values corresponding to the threshold values |
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
sensitivity(churn$predictions,churn$labels)
This function computes the specificity curve required for the auc function and the plot function.
specificity(predictions, labels, perc.rank = TRUE)
predictions |
A numeric vector of classification probabilities (confidences, scores) of the positive event. |
labels |
A factor of observed class labels (responses) with the only allowed values {0,1}. |
perc.rank |
A logical. If TRUE (default) the percentile rank of the predictions is used. |
A list containing the following elements:
cutoffs |
A numeric vector of threshold values |
measure |
A numeric vector of specificity values corresponding to the threshold values |
Authors: Michel Ballings and Dirk Van den Poel, Maintainer: [email protected]
Ballings, M., Van den Poel, D., Threshold Independent Performance Measures for Probabilistic Classification Algorithms, Forthcoming.
sensitivity, specificity, accuracy, roc, auc, plot
data(churn)
specificity(churn$predictions,churn$labels)