Title: | Multi-Armed Qini |
---|---|
Description: | Policy evaluation using generalized Qini curves: Evaluate data-driven treatment targeting rules for one or more treatment arms over different budget constraints in experimental or observational settings under unconfoundedness. |
Authors: | Erik Sverdrup [aut, cre], Han Wu [aut], Susan Athey [aut], Stefan Wager [aut] |
Maintainer: | Erik Sverdrup <[email protected]> |
License: | MIT + file LICENSE |
Version: | 0.5.0 |
Built: | 2024-11-25 15:23:55 UTC |
Source: | CRAN |
Get an estimate of Q(B).
average_gain(object, spend)
object | A maq object.
spend | The spend level B.
An estimate of Q(B) along with standard errors.
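A minimal usage sketch (not part of the package examples), using simulated rewards, costs, and evaluation scores in place of estimates from a real CATE model:

# Sketch: fit a Qini curve on simulated inputs and read off Q(0.1).
n <- 1000
K <- 2
reward <- matrix(rnorm(n * K, mean = 0.5), n, K)   # stand-in CATE estimates
cost <- matrix(runif(n * K), n, K)                 # per-unit, per-arm costs
DR.scores <- reward + matrix(rnorm(n * K), n, K)   # stand-in evaluation scores
qini <- maq(reward, cost, DR.scores, R = 200)

# Point estimate and standard error of the gain at a spend of 0.1 per unit.
average_gain(qini, spend = 0.1)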
Given two Qini curves, Q_a and Q_b, get an estimate of the difference Q_a(B) - Q_b(B), at a spend level B.
difference_gain(object.lhs, object.rhs, spend)
object.lhs | A maq object (the curve Q_a).
object.rhs | A maq object (the curve Q_b).
spend | The spend level B.
An estimate of the difference in gain along with standard errors.
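For illustration, a sketch on simulated data comparing an all-arms curve with a single-arm baseline fit on the same evaluation data (both with R = 200 bootstrap replicates and the default seed so that paired standard errors are available):

# Sketch: compare two Qini curves fit on the same simulated evaluation data.
n <- 1000
K <- 2
reward <- matrix(rnorm(n * K, mean = 0.5), n, K)
cost <- matrix(runif(n * K), n, K)
DR.scores <- reward + matrix(rnorm(n * K), n, K)

qini.all <- maq(reward, cost, DR.scores, R = 200)
qini.arm1 <- maq(reward[, 1], cost[, 1], DR.scores[, 1], R = 200)

# Estimate Q_all(0.1) - Q_arm1(0.1) along with a standard error.
difference_gain(qini.all, qini.arm1, spend = 0.1)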
A simple convenience function to construct an AIPW-based evaluation score given estimates of conditional means and treatment propensities.
get_aipw_scores(Y, W, mu.hat, W.hat = NULL)
Y | The observed outcome.
W | The observed treatment assignment (must be a factor vector, where the first factor level is the control arm).
mu.hat | A matrix of conditional mean estimates for each arm, with columns corresponding to the levels of W (i.e., estimates of E[Y | W = k, X] for k = 0, ..., K).
W.hat | Optional treatment propensities. If these vary by unit and arm, then this should be a matrix with the treatment assignment probability of units to arms, with columns corresponding to the levels of W.
An n x K matrix of evaluation scores (eqn (13) in the multi-armed Qini paper).
Robins, James M., Andrea Rotnitzky, and Lue Ping Zhao. "Estimation of regression coefficients when some regressors are not always observed." Journal of the American Statistical Association, 89(427), 1994.
Sverdrup, Erik, Han Wu, Susan Athey, and Stefan Wager. "Qini Curves for Multi-Armed Treatment Rules". arXiv preprint arXiv:2306.11979, 2023.
if (require("grf", quietly = TRUE)) { # Simulate data with two treatment arms (k = 1, 2) and a control arm (k = 0). n <- 3000 p <- 5 X <- matrix(runif(n * p), n, p) W <- as.factor(sample(c("0", "1", "2"), n, replace = TRUE)) Y <- X[, 1] + X[, 2] * (W == "1") + 1.5 * X[, 3] * (W == "2") + rnorm(n) # Fit a CATE estimator on a training sample. train <- sample(1:n, n/2) tau.forest <- grf::multi_arm_causal_forest(X[train, ], Y[train], W[train]) # Predict CATEs on held out evaluation data. test <- -train tau.hat <- predict(tau.forest, X[test, ], drop = TRUE)$predictions # Form costs. cost <- cbind(X[test, 4] / 4, X[test, 5]) # Estimate nuisance components for test set AIPW scores. X.test <- X[test, ] Y.test <- Y[test] W.test <- W[test] # Fit models for E[Y | W = k, X], k = 0, 1, 2, using for example separate random forests. Y0.forest <- grf::regression_forest(X.test[W.test == 0, ], Y.test[W.test == 0]) Y1.forest <- grf::regression_forest(X.test[W.test == 1, ], Y.test[W.test == 1]) Y2.forest <- grf::regression_forest(X.test[W.test == 2, ], Y.test[W.test == 2]) mu.hat = cbind( mu0 = predict(Y0.forest, X.test)$predictions, mu1 = predict(Y1.forest, X.test)$predictions, mu2 = predict(Y2.forest, X.test)$predictions ) # If unknown, estimate the propensity scores E[W = k | X]. W.hat <- predict(grf::probability_forest(X.test, W.test))$predictions # Form doubly robust scores. DR.scores <- get_aipw_scores(Y.test, W.test, mu.hat, W.hat) # Fit a Qini curve estimated with forest-based AIPW. qini <- maq(tau.hat, cost, DR.scores, R = 200) plot(qini) }
if (require("grf", quietly = TRUE)) { # Simulate data with two treatment arms (k = 1, 2) and a control arm (k = 0). n <- 3000 p <- 5 X <- matrix(runif(n * p), n, p) W <- as.factor(sample(c("0", "1", "2"), n, replace = TRUE)) Y <- X[, 1] + X[, 2] * (W == "1") + 1.5 * X[, 3] * (W == "2") + rnorm(n) # Fit a CATE estimator on a training sample. train <- sample(1:n, n/2) tau.forest <- grf::multi_arm_causal_forest(X[train, ], Y[train], W[train]) # Predict CATEs on held out evaluation data. test <- -train tau.hat <- predict(tau.forest, X[test, ], drop = TRUE)$predictions # Form costs. cost <- cbind(X[test, 4] / 4, X[test, 5]) # Estimate nuisance components for test set AIPW scores. X.test <- X[test, ] Y.test <- Y[test] W.test <- W[test] # Fit models for E[Y | W = k, X], k = 0, 1, 2, using for example separate random forests. Y0.forest <- grf::regression_forest(X.test[W.test == 0, ], Y.test[W.test == 0]) Y1.forest <- grf::regression_forest(X.test[W.test == 1, ], Y.test[W.test == 1]) Y2.forest <- grf::regression_forest(X.test[W.test == 2, ], Y.test[W.test == 2]) mu.hat = cbind( mu0 = predict(Y0.forest, X.test)$predictions, mu1 = predict(Y1.forest, X.test)$predictions, mu2 = predict(Y2.forest, X.test)$predictions ) # If unknown, estimate the propensity scores E[W = k | X]. W.hat <- predict(grf::probability_forest(X.test, W.test))$predictions # Form doubly robust scores. DR.scores <- get_aipw_scores(Y.test, W.test, mu.hat, W.hat) # Fit a Qini curve estimated with forest-based AIPW. qini <- maq(tau.hat, cost, DR.scores, R = 200) plot(qini) }
A simple convenience function to construct an evaluation score matrix via IPW, where entry (i, k) equals

1{W_i = k} Y_i / P[W_i = k | X_i] - 1{W_i = 0} Y_i / P[W_i = 0 | X_i],

where W_i is the treatment assignment of unit i and Y_i the observed outcome. k = 1, ..., K are the K treatment arms and k = 0 is the control arm.
get_ipw_scores(Y, W, W.hat = NULL)
Y | The observed outcome.
W | The observed treatment assignment (must be a factor vector, where the first factor level is the control arm).
W.hat | Optional treatment propensities. If these vary by unit and arm, then this should be a matrix with the treatment assignment probability of units to arms, with columns corresponding to the levels of W. If they only vary by arm, a (K + 1)-length vector can be supplied (as in the example below).
An n x K matrix of evaluation scores.
# Draw some equally likely samples from control arm A and treatment arms B and C.
n <- 5000
W <- as.factor(sample(c("A", "B", "C"), n, replace = TRUE))
Y <- 42 * (W == "B") - 42 * (W == "C") + rnorm(n)
IPW.scores <- get_ipw_scores(Y, W)
# An IPW-based estimate of E[Y(B) - Y(A)] and E[Y(C) - Y(A)]. Should be approx 42 and -42.
colMeans(IPW.scores)

# Draw non-uniformly from the different arms.
W.hat <- c(0.2, 0.2, 0.6)
W <- as.factor(sample(c("A", "B", "C"), n, replace = TRUE, prob = W.hat))
Y <- 42 * (W == "B") - 42 * (W == "C") + rnorm(n)
IPW.scores <- get_ipw_scores(Y, W, W.hat = W.hat)
# Should still be approx 42 and -42.
colMeans(IPW.scores)
Given two Qini curves, Q_a and Q_b, and a maximum spend B.max, get an estimate of the integrated difference ∫_0^B.max (Q_a(B) - Q_b(B)) dB, i.e., the area between the two curves up to the spend level B.max.
integrated_difference(object.lhs, object.rhs, spend)
object.lhs | A maq object (the curve Q_a).
object.rhs | A maq object (the curve Q_b).
spend | The maximum spend level B.max up to which the difference is integrated.
An estimate of the area between the two curves along with standard errors.
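A brief sketch on simulated data (both curves fit with the same number of bootstrap replicates and the default seed, so that paired standard errors can be formed):

# Sketch: area between an all-arms curve and a single-arm curve up to a
# spend of 0.3 per unit, on simulated data.
n <- 1000
K <- 2
reward <- matrix(rnorm(n * K, mean = 0.5), n, K)
cost <- matrix(runif(n * K), n, K)
DR.scores <- reward + matrix(rnorm(n * K), n, K)

qini.all <- maq(reward, cost, DR.scores, R = 200)
qini.arm2 <- maq(reward[, 2], cost[, 2], DR.scores[, 2], R = 200)

integrated_difference(qini.all, qini.arm2, spend = 0.3)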
Fit a curve that shows estimates of a policy value over increasing decision thresholds B. These may include constraints on the treatment allocation, such as the fraction treated or the spend per unit. The policy uses estimated treatment effects, for example from one or more CATE functions, to optimize the treatment allocation under the decision constraint B.
maq(
  reward,
  cost,
  DR.scores,
  budget = NULL,
  target.with.covariates = TRUE,
  R = 0,
  paired.inference = TRUE,
  sample.weights = NULL,
  clusters = NULL,
  tie.breaker = NULL,
  num.threads = NULL,
  seed = 42
)
reward | An n x K matrix of test set treatment effect estimates tau(X_i), for example CATE predictions from one or more estimators.
cost | An n x K matrix of test set costs C(X_i) > 0, where entry (i, k) is the cost of assigning unit i the k-th treatment arm (a K-length vector, or a scalar when K = 1, can be supplied if costs do not vary by unit).
DR.scores | An n x K matrix of evaluation scores used to form estimates of the policy value, for example doubly robust scores from get_aipw_scores or IPW scores from get_ipw_scores.
budget | The maximum spend per unit, B_max, to fit the Qini curve on. Default is NULL (the curve is fit over the entire path of possible spend levels).
target.with.covariates | If TRUE (Default), then the policy takes the covariates X_i into account when allocating treatment. If FALSE, the policy only takes each arm's average reward and cost into account, which yields a non-targeting baseline curve (see the examples).
R | Number of bootstrap replicates for computing standard errors. Default is 0 (only point estimates are computed).
paired.inference | Whether to allow for paired tests with other Qini curves fit on the same evaluation data. If TRUE (Default) then the path of bootstrap replicates is stored in order to perform paired comparisons that account for the correlation between curves evaluated on the same data. This takes memory on the order of O(R * n * K), and requires the comparison objects to be fit on the same evaluation data with the same seed and number of replicates R.
sample.weights | Weights given to an observation in estimation. If NULL, each observation is given the same weight. Default is NULL.
clusters | Vector of integers or factors specifying which cluster each observation corresponds to, which are used to construct clustered standard errors. Default is NULL (ignored).
tie.breaker | An optional permutation of the integers 1 to n used to break potential ties in the optimal treatment allocation (only relevant if the predictions in reward contain ties).
num.threads | Number of threads used in bootstrap replicates. By default, the number of threads is set to the maximum hardware concurrency.
seed | The seed of the C++ random number generator. Default is 42.
Consider k = 1, ..., K mutually exclusive and costly treatment arms, where k = 0 is a zero-cost control arm. Let tau(X_i) be an estimated multi-armed treatment effect function and C(X_i) a known cost function (where the k-th element of these vectors measures E[Y_i(k) - Y_i(0) | X_i] and E[C_i(k) | X_i], where Y_i(k) are potential outcomes corresponding to the k-th treatment state, C_i(k) the cost of assigning unit i the k-th arm, and X_i a set of covariates). We provide estimates of the Qini curve

Q(B) = E[<pi_B(X_i), tau(X_i)>], B in (0, B_max],

where <., .> denotes the inner product, which is the expected gain, at any budget constraint B, when assigning treatment in accordance to pi_B, the treatment policy that optimally selects which arm to assign to which unit while incurring a cost less than or equal to B in expectation when using the given functions tau(.) and C(.):

pi_B = argmax_pi E[<pi(X_i), tau(X_i)>] subject to E[<pi(X_i), C(X_i)>] <= B.

At a budget B, the k-th element of pi_B(X_i) is 1 if assigning the k-th arm to the i-th unit is optimal, and 0 otherwise.

The Qini curve can be used to quantify the value, as measured by the expected gain over assigning each unit the control arm when using the estimated function tau(.) with cost structure C(.) to allocate treatment, as we vary the available budget B.
A fit maq object.
Sverdrup, Erik, Han Wu, Susan Athey, and Stefan Wager. "Qini Curves for Multi-Armed Treatment Rules". arXiv preprint arXiv:2306.11979, 2023.
if (require("grf", quietly = TRUE)) { # Fit a CATE estimator on a training sample. n <- 3000 p <- 5 X <- matrix(runif(n * p), n, p) W <- as.factor(sample(c("0", "1", "2"), n, replace = TRUE)) Y <- X[, 1] + X[, 2] * (W == "1") + 1.5 * X[, 3] * (W == "2") + rnorm(n) train <- sample(1:n, n/2) tau.forest <- grf::multi_arm_causal_forest(X[train, ], Y[train], W[train]) # Predict CATEs on held out evaluation data. test <- -train tau.hat <- predict(tau.forest, X[test, ], drop = TRUE)$predictions # Assume costs equal a unit's pre-treatment covariate - the following are a toy example. cost <- cbind(X[test, 4] / 4, X[test, 5]) # Fit an evaluation forest to compute doubly robust scores on the test set. eval.forest <- grf::multi_arm_causal_forest(X[test, ], Y[test], W[test]) DR.scores <- grf::get_scores(eval.forest, drop = TRUE) # Fit a Qini curve on evaluation data, using 200 bootstrap replicates for confidence intervals. ma.qini <- maq(tau.hat, cost, DR.scores, R = 200) # Plot the Qini curve. plot(ma.qini) legend("topleft", c("All arms", "95% CI"), lty = c(1, 3)) # Get an estimate of gain at a given spend per unit along with standard errors. average_gain(ma.qini, spend = 0.2) # Get the treatment allocation matrix at a given spend per unit. pi.mat <- predict(ma.qini, spend = 0.2) # If the treatment randomization probabilities are known, then an alternative to # evaluation via AIPW scores is to use inverse-propensity weighting (IPW). W.hat <- rep(1/3, 3) IPW.scores <- get_ipw_scores(Y[test], W[test], W.hat) mq.ipw <- maq(tau.hat, cost, IPW.scores) plot(mq.ipw, add = TRUE, col = 2) legend("topleft", c("All arms", "95% CI", "All arms (IPW)"), col = c(1, 1, 2), lty = c(1, 3, 1)) # Estimate some baseline policies. # a) A policy that ignores covariates and only takes the average reward/cost into account. qini.avg <- maq(tau.hat, cost, DR.scores, target.with.covariates = FALSE, R = 200) # b) A policy that only use arm 1. qini.arm1 <- maq(tau.hat[, 1], cost[, 1], DR.scores[, 1], R = 200) # c) A policy that only use arm 2. qini.arm2 <- maq(tau.hat[, 2], cost[, 2], DR.scores[, 2], R = 200) plot(ma.qini, ci.args = NULL) plot(qini.avg, col = 2, add = TRUE, ci.args = NULL) plot(qini.arm1, col = 3, add = TRUE, ci.args = NULL) plot(qini.arm2, col = 4, add = TRUE, ci.args = NULL) legend("topleft", c("All arms (targeting)", "All arms (without targeting)", "Arm 1", "Arm 2"), col = 1:4, lty = 1) # Estimate the value of employing all arms over a random allocation. difference_gain(ma.qini, qini.avg, spend = 0.2) # Estimate the value of targeting with both arms as opposed to targeting with only arm 1. difference_gain(ma.qini, qini.arm1, spend = 0.2) # Estimate the value of targeting with both arms as opposed to targeting with only arm 2. difference_gain(ma.qini, qini.arm2, spend = 0.2) # Compare targeting strategies over a range of budget values by estimating an area between # two curves up to a given spend point. integrated_difference(ma.qini, qini.arm1, spend = 0.3) }
if (require("grf", quietly = TRUE)) { # Fit a CATE estimator on a training sample. n <- 3000 p <- 5 X <- matrix(runif(n * p), n, p) W <- as.factor(sample(c("0", "1", "2"), n, replace = TRUE)) Y <- X[, 1] + X[, 2] * (W == "1") + 1.5 * X[, 3] * (W == "2") + rnorm(n) train <- sample(1:n, n/2) tau.forest <- grf::multi_arm_causal_forest(X[train, ], Y[train], W[train]) # Predict CATEs on held out evaluation data. test <- -train tau.hat <- predict(tau.forest, X[test, ], drop = TRUE)$predictions # Assume costs equal a unit's pre-treatment covariate - the following are a toy example. cost <- cbind(X[test, 4] / 4, X[test, 5]) # Fit an evaluation forest to compute doubly robust scores on the test set. eval.forest <- grf::multi_arm_causal_forest(X[test, ], Y[test], W[test]) DR.scores <- grf::get_scores(eval.forest, drop = TRUE) # Fit a Qini curve on evaluation data, using 200 bootstrap replicates for confidence intervals. ma.qini <- maq(tau.hat, cost, DR.scores, R = 200) # Plot the Qini curve. plot(ma.qini) legend("topleft", c("All arms", "95% CI"), lty = c(1, 3)) # Get an estimate of gain at a given spend per unit along with standard errors. average_gain(ma.qini, spend = 0.2) # Get the treatment allocation matrix at a given spend per unit. pi.mat <- predict(ma.qini, spend = 0.2) # If the treatment randomization probabilities are known, then an alternative to # evaluation via AIPW scores is to use inverse-propensity weighting (IPW). W.hat <- rep(1/3, 3) IPW.scores <- get_ipw_scores(Y[test], W[test], W.hat) mq.ipw <- maq(tau.hat, cost, IPW.scores) plot(mq.ipw, add = TRUE, col = 2) legend("topleft", c("All arms", "95% CI", "All arms (IPW)"), col = c(1, 1, 2), lty = c(1, 3, 1)) # Estimate some baseline policies. # a) A policy that ignores covariates and only takes the average reward/cost into account. qini.avg <- maq(tau.hat, cost, DR.scores, target.with.covariates = FALSE, R = 200) # b) A policy that only use arm 1. qini.arm1 <- maq(tau.hat[, 1], cost[, 1], DR.scores[, 1], R = 200) # c) A policy that only use arm 2. qini.arm2 <- maq(tau.hat[, 2], cost[, 2], DR.scores[, 2], R = 200) plot(ma.qini, ci.args = NULL) plot(qini.avg, col = 2, add = TRUE, ci.args = NULL) plot(qini.arm1, col = 3, add = TRUE, ci.args = NULL) plot(qini.arm2, col = 4, add = TRUE, ci.args = NULL) legend("topleft", c("All arms (targeting)", "All arms (without targeting)", "Arm 1", "Arm 2"), col = 1:4, lty = 1) # Estimate the value of employing all arms over a random allocation. difference_gain(ma.qini, qini.avg, spend = 0.2) # Estimate the value of targeting with both arms as opposed to targeting with only arm 1. difference_gain(ma.qini, qini.arm1, spend = 0.2) # Estimate the value of targeting with both arms as opposed to targeting with only arm 2. difference_gain(ma.qini, qini.arm2, spend = 0.2) # Compare targeting strategies over a range of budget values by estimating an area between # two curves up to a given spend point. integrated_difference(ma.qini, qini.arm1, spend = 0.3) }
Plot the estimated curve Q(B). If the underlying estimated policy pi_B entails treating zero units (that is, all the estimated treatment effects are negative) then this function returns an empty value.
## S3 method for class 'maq'
plot(
  x,
  ...,
  add = FALSE,
  horizontal.line = TRUE,
  ci.args = list(),
  grid.step = NULL
)
x | A maq object.
... | Additional arguments passed to plot.
add | Whether to add to an already existing plot. Default is FALSE.
horizontal.line | Whether to draw a horizontal line where the Qini curve plateaus. Only applies if the maq object is fit with a maximum budget sufficient to treat all units that are expected to benefit. Default is TRUE.
ci.args | A list of optional arguments passed to lines() for drawing 95% confidence bars. Set to NULL to omit the confidence intervals (as in the maq() examples).
grid.step | The spend grid increment size to plot the curve on. Default is NULL, in which case a grid size is chosen automatically (approximately 1000 points along the estimated curve).
A data.frame with the data making up the plot (point estimates and lower/upper 95% CIs)
if (require("ggplot2", quietly = TRUE)) { # Generate toy data and customize plots. n = 500 K = 1 reward = matrix(1 + rnorm(n * K), n, K) scores = reward + matrix(rnorm(n * K), n, K) cost = 1 # Fit Qini curves. qini.avg <- maq(reward, cost, scores, R = 200, target.with.covariates = FALSE) qini <- maq(reward, cost, scores, R = 200) # In some settings we may want to plot using one of R's many plot libraries. # The plot method invisibly returns the plot data we can use for this purpose. df.qini.baseline <- plot(qini.avg) df.qini <- plot(qini, add = TRUE, col = 2) # Make an alternate plot style, using, for example, ggplot. ggplot(df.qini, aes(x = spend, y = gain)) + geom_ribbon(aes(ymin = gain - 1.96 * std.err, ymax = gain + 1.96 * std.err), fill = "lightgray") + geom_line(linewidth = 2) + ylab("Policy value") + xlab("Fraction treated") + geom_line(data = df.qini.baseline, aes(x = spend, y = gain), lty = 2) }
if (require("ggplot2", quietly = TRUE)) { # Generate toy data and customize plots. n = 500 K = 1 reward = matrix(1 + rnorm(n * K), n, K) scores = reward + matrix(rnorm(n * K), n, K) cost = 1 # Fit Qini curves. qini.avg <- maq(reward, cost, scores, R = 200, target.with.covariates = FALSE) qini <- maq(reward, cost, scores, R = 200) # In some settings we may want to plot using one of R's many plot libraries. # The plot method invisibly returns the plot data we can use for this purpose. df.qini.baseline <- plot(qini.avg) df.qini <- plot(qini, add = TRUE, col = 2) # Make an alternate plot style, using, for example, ggplot. ggplot(df.qini, aes(x = spend, y = gain)) + geom_ribbon(aes(ymin = gain - 1.96 * std.err, ymax = gain + 1.96 * std.err), fill = "lightgray") + geom_line(linewidth = 2) + ylab("Policy value") + xlab("Fraction treated") + geom_line(data = df.qini.baseline, aes(x = spend, y = gain), lty = 2) }
Get an estimate of the policy pi_B(X_i) at a spend level B. pi_B(X_i) is a K-dimensional vector where the k-th element is 1 if assigning the k-th arm to unit i is optimal at a given spend B, and 0 otherwise (with all entries 0 if the control arm is assigned). Depending on the value of B, pi_B(X_j) might be fractional for at most one unit j. There are two such cases: the first is when there is not sufficient budget left to assign j an initial arm; the second is when there is not sufficient budget to upgrade unit j from arm k to k'. In these cases pi_B(X_j) takes on one or two fractional values, respectively, representing an assignment probability of a given arm.
## S3 method for class 'maq'
predict(object, spend, type = c("matrix", "vector"), ...)
object | A maq object.
spend | The spend level B.
type | If "matrix" (Default), then return a matrix where the i-th row equals pi_B(X_i), as described above. If "vector", then return an n-length vector with the arm (from 0 to K) assigned to each unit at the given spend B.
... | Additional arguments (currently ignored).
A matrix with row i equal to pi_B(X_i). If type = "vector", then an n-length vector with elements equal to the arm (from 0 to K) that is assigned at the given spend B (note: if the treatment allocation contains a fractional entry at the given B, then the returned vector is the policy at the nearest spend B' in the solution path where the allocation is integer-valued but incurs a cost B' < B).
# Generate some toy data and fit a solution path.
n <- 10
K <- 4
reward <- matrix(rnorm(n * K), n, K)
cost <- matrix(runif(n * K), n, K)
DR.scores <- reward + rnorm(n)
path <- maq(reward, cost, DR.scores)

# Get the treatment allocation matrix.
pi.mat <- predict(path, 0.1)
pi.mat
# pi.mat might have fractional entries for a single unit but satisfies
# the budget in expectation exactly.
sum(cost * pi.mat) / n

# Get the treatment allocation instead encoded in the set {0, 1, ..., K}.
pi.vec <- predict(path, 0.1, type = "vector")
pi.vec
# If a unit has a fractional entry, then pi.vec will incur a cost slightly
# lower than 0.1.
sum(cost[cbind(1:n, pi.vec)]) / n

# Retrieve the underlying solution path.
data.path <- summary(path)

# If we predict at a spend level on this grid, say entry 5,
# then the policy is integer-valued:
spend <- data.path$spend[5]
predict(path, spend)
predict(path, spend, type = "vector")
Print a maq object.
## S3 method for class 'maq'
print(x, ...)
x | A maq object.
... | Additional arguments (currently ignored).
Get a data.frame with columns equal to [B, Q(B), std.err(Q(B)), i, k], where i is the unit and k the treatment arm that is optimal to assign at a spend level B.
## S3 method for class 'maq'
summary(object, ...)
object | A maq object.
... | Additional arguments (currently ignored).
A data.frame making up the elements of the estimated Qini curve.
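As a small illustration (not part of the package examples), the solution path returned by summary() can be inspected directly; the spend column gives the grid of budget levels traced by the curve:

# Sketch: retrieve and inspect the solution path of a fit maq object
# (simulated inputs).
n <- 100
K <- 2
reward <- matrix(rnorm(n * K), n, K)
cost <- matrix(runif(n * K), n, K)
DR.scores <- reward + matrix(rnorm(n * K), n, K)
path <- maq(reward, cost, DR.scores)

data.path <- summary(path)
head(data.path)           # one row per step in the solution path
range(data.path$spend)    # range of spend levels covered by the curve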