Title: | Hierarchical Dirichlet Process Generalized Linear Models |
Description: | Implementation of MCMC algorithms to estimate the Hierarchical Dirichlet Process Generalized Linear Model (hdpGLM) presented in Ferrari (2020), Modeling Context-Dependent Latent Heterogeneity, Political Analysis <doi:10.1017/pan.2019.13>, and in Ferrari (2023) <doi:10.18637/jss.v107.i10>. |
Authors: | Diogo Ferrari [aut, cre] |
Maintainer: | Diogo Ferrari <[email protected]> |
License: | MIT + file LICENSE |
Version: | 1.0.3 |
Built: | 2024-11-06 06:42:57 UTC |
Source: | CRAN |
This function returns a data frame with the data points classified according to the cluster probabilities estimated by the function hdpGLM.
classify(data, samples)
data: a data frame with the data set used to estimate the model
samples: the output of the function hdpGLM
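A minimal usage sketch (not part of the original manual; it assumes the hdpGLM package is loaded and mirrors the minimal estimation settings used in the examples elsewhere on this page):

```r
## illustration only: very few MCMC iterations
set.seed(10)
n    = 50
data = tibble::tibble(x1 = rnorm(n, -3),
                      x2 = rnorm(n, 3),
                      y  = 3 + 4*x1 - x2 + rnorm(n))
mcmc    = list(burn.in = 0, n.iter = 20)
samples = hdpGLM(y ~ x1 + x2, data = data, mcmc = mcmc)
## attach the estimated cluster of each observation to the data
data.classified = classify(data, samples)
head(data.classified)
```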
This function returns the posterior mean of the coefficients.
## S3 method for class 'dpGLM'
coef(object, ...)
object: a dpGLM object returned by the function hdpGLM
...: additional parameters
This function returns the posterior mean of the coefficients.
## S3 method for class 'hdpGLM'
coef(object, ...)
object: a hdpGLM object returned by the function hdpGLM
...: additional parameters
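For illustration (not in the original manual), the posterior means can be extracted from a fitted model as follows; the estimation settings mirror the minimal examples elsewhere on this page:

```r
set.seed(10)
n    = 50
data = tibble::tibble(x1 = rnorm(n, -3),
                      x2 = rnorm(n, 3),
                      y  = 3 + 4*x1 - x2 + rnorm(n))
mcmc    = list(burn.in = 0, n.iter = 20)
samples = hdpGLM(y ~ x1 + x2, data = data, mcmc = mcmc)
## posterior mean of the coefficients in each cluster
coef(samples)
```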
The function estimates a semi-parametric mixture of Generalized Linear Models. It uses a (hierarchical) Dependent Dirichlet Process Prior for the mixture probabilities.
hdpGLM( formula1, formula2 = NULL, data, mcmc, family = "gaussian", K = 100, context.id = NULL, constants = NULL, weights = NULL, n.display = 1000, na.action = "exclude", imp.bin = "R" )
formula1: a single symbolic description of the linear model of the mixture GLM components to be fitted. The syntax is the same as used in the lm function.
formula2: either NULL (default) or a single symbolic description of the linear model of the hierarchical component of the model. It specifies how the average parameter of the base measure of the Dirichlet Process Prior varies linearly as a function of group-level covariates. If NULL, the model is estimated without the hierarchical component.
data: a data.frame with all the variables specified in formula1 and formula2.
mcmc: a named list with the elements burn.in (integer, the number of burn-in iterations) and n.iter (integer, the number of iterations to keep after the burn-in period).
family: a character with either 'gaussian', 'binomial', or 'multinomial'. It indicates the family of the GLM components of the mixture model.
K: an integer indicating the maximum number of clusters at which to truncate the Dirichlet Process Prior in order to use the blocked Gibbs sampler.
context.id: string with the name of the column in the data that uniquely identifies the contexts. If NULL (default), contexts are identified by the values of the context-level covariates.
constants: either NULL or a list with the constants of the model.
weights: numeric vector with the same size as the number of rows of the data. It must contain the weights of the observations in the data set. NOTE: FEATURE NOT IMPLEMENTED YET
n.display: an integer indicating the number of iterations to wait before printing information about the estimation process. If zero, no information is displayed. Note: displaying information at every iteration (n.display=1) may slightly increase the estimation time.
na.action: string with the action to be taken for NA values.
imp.bin: string, either "R" or "Cpp", indicating the language of the implementation of the binomial model.
This function estimates a Hierarchical Dirichlet Process generalized linear model, a semi-parametric Bayesian approach to regression estimation with clustering. The estimation is conducted with a blocked Gibbs sampler when the outcome variable is Gaussian, and with Metropolis-Hastings within Gibbs when the outcome variable is binomial or multinomial. The distribution is specified with the parameter family. See:
Ferrari, D. (2020). Modeling Context-Dependent Latent Effect Heterogeneity, Political Analysis, 28(1), 20–46. doi:10.1017/pan.2019.13.
Ferrari, D. (2023). "hdpGLM: An R Package to Estimate Heterogeneous Effects in Generalized Linear Models Using Hierarchical Dirichlet Process." Journal of Statistical Software, 107(10), 1-37. doi:10.18637/jss.v107.i10.
Ishwaran, H., & James, L. F. (2001). Gibbs Sampling Methods for Stick-Breaking Priors. Journal of the American Statistical Association, 96(453), 161–173.
Neal, R. M. (2000). Markov Chain Sampling Methods for Dirichlet Process Mixture Models. Journal of Computational and Graphical Statistics, 9(2), 249–265.
The function returns a list with the elements samples, pik, max_active, n.iter, burn.in, and time.elapsed. The samples element contains an MCMC object (from the coda package) with the samples from the posterior distribution. The pik element is an n x K matrix with the estimated probabilities that observation i belongs to cluster k.
## Note: this example is for illustration only. Run it manually with an
## increased number of iterations (and data size n) to see actual results.
set.seed(10)
n = 300
data = tibble::tibble(x1 = rnorm(n, -3),
                      x2 = rnorm(n, 3),
                      z  = sample(1:3, n, replace = TRUE),
                      y  = I(z == 1) * (3 + 4*x1 - x2 + rnorm(n)) +
                           I(z == 2) * (3 + 2*x1 + x2 + rnorm(n)) +
                           I(z == 3) * (3 - 4*x1 - x2 + rnorm(n)))
mcmc    = list(burn.in = 0, n.iter = 20)
samples = hdpGLM(y ~ x1 + x2, data = data, mcmc = mcmc,
                 family = 'gaussian', n.display = 30, K = 50)
summary(samples)
plot(samples)
plot(samples, separate = TRUE)
## compare with a single GLM
## glm(y ~ x1 + x2, data = data, family = 'gaussian')
Deprecated
hdpGLM_classify(data, samples)
data: a data frame with the data set used to estimate the model
samples: the output of the function hdpGLM
Further information is available at: http://www.diogoferrari.com/hdpGLM/index.html
References:
- Ferrari, D. (2020). Modeling Context-Dependent Latent Effect Heterogeneity. Political Analysis, 28(1), 20–46.
- Mukhopadhyay, S., & Gelfand, A. E. (1997). Dirichlet Process Mixed Generalized Linear Models. Journal of the American Statistical Association, 92(438), 633–639.
- Hannah, L. A., Blei, D. M., & Powell, W. B. (2011). Dirichlet Process Mixtures of Generalized Linear Models. Journal of Machine Learning Research, 12(Jun), 1923–1953.
- Heckman, J. J., & Vytlacil, E. J. (2007). Econometric Evaluation of Social Programs, Part I: Causal Models, Structural Models and Econometric Policy Evaluation. Handbook of Econometrics, 6, 4779–4874.
The package implements the hierarchical Dirichlet process Generalized Linear Model proposed in Ferrari (2020), Modeling Context-Dependent Latent Effect Heterogeneity, which expands the non-parametric Bayesian models proposed in Mukhopadhyay and Gelfand (1997), Hannah et al. (2011), and Heckman and Vytlacil (2007) to handle context-dependent cases. The package can be used to estimate latent heterogeneity in the marginal effect of GLM linear coefficients, to cluster data points based on that latent heterogeneity, and to investigate the occurrence of Simpson's Paradox due to latent or omitted features.
This function generates parameters that can be used to simulate data sets from the Hierarchical Dirichlet Process Generalized Linear Model (hdpGLM) or the dpGLM.
hdpGLM_simParameters( K, nCov = 2, nCovj = 0, J = 1, pi = NULL, same.K = FALSE, seed = NULL, context.effect = NULL, same.clusters.across.contexts = NULL, context.dependent.cluster = NULL )
K: integer, the number of clusters. If there are multiple contexts, K is the average number of clusters across contexts, and each context gets a number of clusters sampled from a Poisson distribution, except if same.K is TRUE.
nCov: integer, the number of covariates of the GLM components.
nCovj: an integer indicating the number of covariates determining the average parameter of the base measure of the Dirichlet process prior.
J: an integer representing the number of contexts.
pi: either NULL or a vector with length K that adds up to 1. If not NULL, it determines the mixture probabilities.
same.K: boolean, used when data is sampled from more than one context. If TRUE, all contexts get the same number of clusters.
seed: a seed for set.seed.
context.effect: either NULL or a specification of which coefficients of the individual-level components vary with the context-level covariates.
same.clusters.across.contexts: boolean; if TRUE, all contexts have the same clusters.
context.dependent.cluster: integer, indicates which cluster will be context-dependent. If 0, all clusters are context-dependent.
The function returns a list with the parameters used to generate data sets from the hdpGLM model. This list can be used in the function hdpGLM_simulateData.
pars = hdpGLM_simParameters(nCov = 2, K = 2, nCovj = 3, J = 20,
                            same.clusters.across.contexts = FALSE,
                            context.dependent.cluster = 0)
Simulate a Data Set from hdpGLM
hdpGLM_simulateData( n, K, nCov = 2, nCovj = 0, J = 1, family = "gaussian", parameters = NULL, pi = NULL, same.K = FALSE, seed = NULL, context.effect = NULL, same.clusters.across.contexts = NULL, context.dependent.cluster = NULL )
n: integer, the sample size of the data. If there are multiple contexts, each context will have n cases.
K: integer, the number of clusters. If there are multiple contexts, K is the average number of clusters across contexts, and each context gets a number of clusters sampled from a Poisson distribution, except if same.K is TRUE.
nCov: integer, the number of covariates of the GLM components.
nCovj: an integer indicating the number of covariates determining the average parameter of the base measure of the Dirichlet process prior.
J: an integer representing the number of contexts.
family: a character with either 'gaussian', 'binomial', or 'multinomial'. It indicates the family of the GLM components of the mixture model.
parameters: either NULL or a list with the parameters to generate the model, in the same format as the output of the function hdpGLM_simParameters. If not NULL, it must contain a sublist named beta, an element named tau, and a vector named pi. The sublist beta must be a list of vectors, each one with size nCov+1, to be the coefficients of the GLM mixture components that will generate the data. For tau, if nCovj=0 (single-context case) it must be a 1x1 matrix containing 1; if nCovj>0, it must be a (nCov+1)x(nCovj+1) matrix. The vector pi must add up to 1 and have length K.
pi: either NULL or a vector with length K that adds up to 1. If not NULL, it determines the mixture probabilities.
same.K: boolean, used when data is sampled from more than one context. If TRUE, all contexts get the same number of clusters.
seed: a seed for set.seed.
context.effect: either NULL or a specification of which coefficients of the individual-level components vary with the context-level covariates.
same.clusters.across.contexts: boolean; if TRUE, all contexts have the same clusters.
context.dependent.cluster: integer, indicates which cluster will be context-dependent. If 0, all clusters are context-dependent.
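The manual provides no example for this function; the following is a minimal sketch based on the parameter descriptions above (illustration only, assuming the hdpGLM package is loaded):

```r
## single-context data set: 3 clusters, 2 individual-level covariates
sim = hdpGLM_simulateData(n = 200, K = 3, nCov = 2, family = 'gaussian')
## multi-context data set: 10 contexts, 3 context-level covariates
sim.h = hdpGLM_simulateData(n = 200, K = 2, nCov = 2, nCovj = 3, J = 10)
```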
Generic method to return the MCMC information.
mcmc_info.dpGLM(x, ...)
x: a dpGLM object returned by the function hdpGLM
...: ignored
Generic method to return the MCMC information.
mcmc_info.hdpGLM(x, ...)
x: a hdpGLM object returned by the function hdpGLM
...: ignored
This function returns the number of clusters found in the estimation.
nclusters(object)
object: a dpGLM or hdpGLM object returned by the function hdpGLM
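A short sketch (not in the original manual), reusing the minimal estimation settings from the hdpGLM examples:

```r
set.seed(10)
n    = 50
data = tibble::tibble(x1 = rnorm(n, -3),
                      x2 = rnorm(n, 3),
                      y  = 3 + 4*x1 - x2 + rnorm(n))
mcmc    = list(burn.in = 0, n.iter = 20)
samples = hdpGLM(y ~ x1 + x2, data = data, mcmc = mcmc)
## number of clusters found in the estimation
nclusters(samples)
```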
Plot the posterior distribution of the linear parameters beta for each context
plot_beta( samples, X = NULL, context.id = NULL, true.beta = NULL, title = NULL, subtitle = NULL, plot.mean = FALSE, plot.grid = FALSE, showKhat = FALSE, col = NULL, xlab.size = NULL, ylab.size = NULL, title.size = NULL, legend.size = NULL, xtick.distance = NULL, left.margin = 0, ytick.distance = NULL, col.border = "white" )
samples: an output of the function hdpGLM
X: a string vector with the names of the first-level covariates whose associated tau should be displayed
context.id: string with the name of the column containing the labels identifying the contexts. This variable should have been specified when the estimation was conducted with the function hdpGLM.
true.beta: a data.frame with the true values of the linear coefficients beta, if they are known (e.g., in simulations)
title: string, title of the plot
subtitle: string, the subtitle of the plot
plot.mean: boolean; if TRUE, the posterior mean of the coefficients is displayed
plot.grid: boolean; if TRUE, the densities are displayed in a grid
showKhat: boolean; if TRUE, a label with the estimated number of clusters is displayed
col: string, color of the densities
xlab.size: numeric, size of the breaks in the x-axis
ylab.size: numeric, size of the breaks in the y-axis
title.size: numeric, size of the title
legend.size: numeric, size of the legend
xtick.distance: numeric, distance between x-axis marks and bottom of the figure
left.margin: numeric, distance between left margin and left side of the figure
ytick.distance: numeric, distance between y-axis marks and bottom of the figure
col.border: string, color of the border of the densities
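The manual omits an example here; the following sketch (not from the original manual, with very few MCMC iterations) reuses the two-context data pattern from the plot_hdpglm example:

```r
library(magrittr)
set.seed(10)
n = 20
data.context1 = tibble::tibble(x1 = rnorm(n, -3), x2 = rnorm(n, 3),
                               y = 3 + 4*x1 - x2 + rnorm(n), w = 20)
data.context2 = tibble::tibble(x1 = rnorm(n, -3), x2 = rnorm(n, 3),
                               y = 1 + 3*x1 - 2*x2 + rnorm(n), w = 10)
data = data.context1 %>% dplyr::bind_rows(data.context2)
mcmc    = list(burn.in = 1, n.iter = 50)
samples = hdpGLM(y ~ x1 + x2, y ~ w, data = data, mcmc = mcmc, n.display = 0)
## posterior distribution of beta in each context
plot_beta(samples)
plot_beta(samples, plot.mean = TRUE)
```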
Create a plot with the betas sampled from their distribution, as a function of the context-level feature W. Only works for the hierarchical model (hdpGLM), not the dpGLM.
plot_beta_sim(data, w.idx, ncol = NULL)
data: the output of the function hdpGLM_simulateData
w.idx: integer, the index of the context-level covariate to plot
ncol: integer, the number of columns in the grid of the plot
This function creates a plot with two grids: one with the posterior expectation of the betas as a function of the context-level covariates, and the other with the posterior distribution of tau.
plot_hdpglm( samples, X = NULL, W = NULL, ncol.taus = 1, ncol.betas = NULL, ncol.w = NULL, nrow.w = NULL, smooth.line = FALSE, pred.pexp.beta = FALSE, title.tau = NULL, true.tau = NULL, title.beta = NULL, tau.x.axis.size = 1.1, tau.y.axis.size = 1.1, tau.title.size = 1.2, tau.panel.title.size = 1.4, tau.legend.size = 1, beta.x.axis.size = 1.1, beta.y.axis.size = 1.1, beta.title.size = 1.2, beta.panel.title.size = 1.4, beta.legend.size = 1, tau.xlab = NULL )
samples: an output of the function hdpGLM
X: a string vector with the names of the first-level covariates whose associated tau should be displayed
W: a string vector with the names of the context-level covariate(s) whose linear effect will be displayed. If NULL, all context-level covariates are used.
ncol.taus: integer with the number of columns of the grid containing the posterior distribution of tau
ncol.betas: integer with the number of columns of the posterior expectation of betas as function of context-level features
ncol.w: integer with the number of columns to use to display the different context-level covariates
nrow.w: integer with the number of rows to use to display the different context-level covariates
smooth.line: boolean; if TRUE, a smooth line is fitted to the posterior expectation of beta as a function of the context-level covariates
pred.pexp.beta: boolean; if TRUE, the fitted line of the posterior expectation of beta is displayed
title.tau: string, the title for the posterior distribution of the context effects
true.tau: a data.frame with the true values of tau, if they are known (e.g., in simulations)
title.beta: string, the title for the posterior expectation of beta as function of context-level covariates
tau.x.axis.size: numeric, relative size of the x-axis of the plot with tau
tau.y.axis.size: numeric, relative size of the y-axis of the plot with tau
tau.title.size: numeric, relative size of the title of the plot with tau
tau.panel.title.size: numeric, relative size of the title of the panels of the plot with tau
tau.legend.size: numeric, relative size of the legend of the plot with tau
beta.x.axis.size: numeric, relative size of the x-axis of the plot with beta
beta.y.axis.size: numeric, relative size of the y-axis of the plot with beta
beta.title.size: numeric, relative size of the title of the plot with beta
beta.panel.title.size: numeric, relative size of the title of the panels of the plot with beta
beta.legend.size: numeric, relative size of the legend of the plot with beta
tau.xlab: string, the label of the x-axis for the plot with tau
library(magrittr)
## Note: this example is just for illustration. MCMC iterations are very reduced.
set.seed(10)
n = 20
data.context1 = tibble::tibble(x1 = rnorm(n, -3),
                               x2 = rnorm(n, 3),
                               z  = sample(1:3, n, replace = TRUE),
                               y  = I(z == 1) * (3 + 4*x1 - x2 + rnorm(n)) +
                                    I(z == 2) * (3 + 2*x1 + x2 + rnorm(n)) +
                                    I(z == 3) * (3 - 4*x1 - x2 + rnorm(n)),
                               w  = 20)
data.context2 = tibble::tibble(x1 = rnorm(n, -3),
                               x2 = rnorm(n, 3),
                               z  = sample(1:2, n, replace = TRUE),
                               y  = I(z == 1) * (1 + 3*x1 - 2*x2 + rnorm(n)) +
                                    I(z == 2) * (1 - 2*x1 + x2 + rnorm(n)),
                               w  = 10)
data = data.context1 %>% dplyr::bind_rows(data.context2)
## estimation
mcmc    = list(burn.in = 1, n.iter = 50)
samples = hdpGLM(y ~ x1 + x2, y ~ w, data = data, mcmc = mcmc, n.display = 1)
plot_hdpglm(samples)
plot_hdpglm(samples, ncol.taus = 2, ncol.betas = 2, X = 'x1')
plot_hdpglm(samples, ncol.taus = 2, ncol.betas = 2, X = 'x1',
            ncol.w = 2, nrow.w = 1, pred.pexp.beta = TRUE, smooth.line = TRUE)
This function plots the posterior expectation of beta, the linear effect of the individual-level covariates, as a function of the context-level covariates.
plot_pexp_beta( samples, X = NULL, W = NULL, pred.pexp.beta = FALSE, ncol.beta = NULL, ylab = NULL, nrow.w = NULL, ncol.w = NULL, smooth.line = FALSE, title = NULL, legend.position = "top", col.pred.line = "red", x.axis.size = 1.1, y.axis.size = 1.1, title.size = 12, panel.title.size = 1.4, legend.size = 1 )
samples: an output of the function hdpGLM
X: a string vector with the names of the first-level covariates whose associated tau should be displayed
W: a string vector with the names of the context-level covariate(s) whose linear effect will be displayed. If NULL, all context-level covariates are used.
pred.pexp.beta: boolean; if TRUE, a fitted line of the posterior expectation of beta is displayed
ncol.beta: integer with number of columns of the grid used for each group of context-level covariates
ylab: string, the label of the y-axis
nrow.w: integer with the number of rows of the grid
ncol.w: integer with the number of columns of the grid
smooth.line: boolean; if TRUE, a smooth line is fitted to the posterior expectations
title: string, title of the plot
legend.position: one of four options: "top" (default), "bottom", "left", or "right". It indicates the position of the legend.
col.pred.line: string with the color of the fitted line. Only works if pred.pexp.beta is TRUE.
x.axis.size: numeric, the relative size of the label in the x-axis
y.axis.size: numeric, the relative size of the label in the y-axis
title.size: numeric, absolute size of the title
panel.title.size: numeric, the relative size of the titles in the panel of the plot
legend.size: numeric, the relative size of the legend
library(magrittr)
set.seed(66)
## Note: this example is just for illustration. MCMC iterations are very reduced.
set.seed(10)
n = 20
data.context1 = tibble::tibble(x1 = rnorm(n, -3),
                               x2 = rnorm(n, 3),
                               z  = sample(1:3, n, replace = TRUE),
                               y  = I(z == 1) * (3 + 4*x1 - x2 + rnorm(n)) +
                                    I(z == 2) * (3 + 2*x1 + x2 + rnorm(n)) +
                                    I(z == 3) * (3 - 4*x1 - x2 + rnorm(n)),
                               w  = 20)
data.context2 = tibble::tibble(x1 = rnorm(n, -3),
                               x2 = rnorm(n, 3),
                               z  = sample(1:2, n, replace = TRUE),
                               y  = I(z == 1) * (1 + 3*x1 - 2*x2 + rnorm(n)) +
                                    I(z == 2) * (1 - 2*x1 + x2 + rnorm(n)),
                               w  = 10)
data = data.context1 %>% dplyr::bind_rows(data.context2)
## estimation
mcmc    = list(burn.in = 1, n.iter = 50)
samples = hdpGLM(y ~ x1 + x2, y ~ w, data = data, mcmc = mcmc, n.display = 1)
plot_pexp_beta(samples)
plot_pexp_beta(samples, X = 'x1', ncol.w = 2, nrow.w = 1)
plot_pexp_beta(samples, X = 'x1', ncol.beta = 2)
plot_pexp_beta(samples, pred.pexp.beta = TRUE, W = "w", X = c("x1", "x2"))
plot_pexp_beta(samples, W = 'w', smooth.line = TRUE, pred.pexp.beta = TRUE,
               ncol.beta = 2)
Function to plot the posterior distribution of tau.
plot_tau( samples, X = NULL, W = NULL, title = NULL, true.tau = NULL, show.all.taus = FALSE, show.all.betas = FALSE, ncol = NULL, legend.position = "top", x.axis.size = 1.1, y.axis.size = 1.1, title.size = 1.2, panel.title.size = 1.4, legend.size = 1, xlab = NULL )
samples: an output of the function hdpGLM
X: a string vector with the names of the first-level covariates whose associated tau should be displayed
W: a string vector with the names of the context-level covariate(s) whose linear effect will be displayed. If NULL, all context-level covariates are used.
title: string, title of the plot
true.tau: a data.frame with the true values of tau, if they are known (e.g., in simulations)
show.all.taus: boolean; if TRUE, the posterior distribution of tau for all first-level covariates is displayed
show.all.betas: boolean; if TRUE, the posterior distribution associated with all first-level covariates is displayed
ncol: number of columns of the grid. If NULL, it is set automatically.
legend.position: one of four options: "top" (default), "bottom", "left", or "right". It indicates the position of the legend.
x.axis.size: numeric, the relative size of the label in the x-axis
y.axis.size: numeric, the relative size of the label in the y-axis
title.size: numeric, the relative size of the title of the plot
panel.title.size: numeric, the relative size of the titles in the panel of the plot
legend.size: numeric, the relative size of the legend
xlab: string, the label of the x-axis
library(magrittr)
set.seed(66)
## Note: this example is just for illustration. MCMC iterations are very reduced.
set.seed(10)
n = 20
data.context1 = tibble::tibble(x1 = rnorm(n, -3),
                               x2 = rnorm(n, 3),
                               z  = sample(1:3, n, replace = TRUE),
                               y  = I(z == 1) * (3 + 4*x1 - x2 + rnorm(n)) +
                                    I(z == 2) * (3 + 2*x1 + x2 + rnorm(n)) +
                                    I(z == 3) * (3 - 4*x1 - x2 + rnorm(n)),
                               w  = 20)
data.context2 = tibble::tibble(x1 = rnorm(n, -3),
                               x2 = rnorm(n, 3),
                               z  = sample(1:2, n, replace = TRUE),
                               y  = I(z == 1) * (1 + 3*x1 - 2*x2 + rnorm(n)) +
                                    I(z == 2) * (1 - 2*x1 + x2 + rnorm(n)),
                               w  = 10)
data = data.context1 %>% dplyr::bind_rows(data.context2)
## estimation
mcmc    = list(burn.in = 1, n.iter = 50)
samples = hdpGLM(y ~ x1 + x2, y ~ w, data = data, mcmc = mcmc, n.display = 1)
plot_tau(samples)
plot_tau(samples, ncol = 2)
plot_tau(samples, X = 'x1', W = 'w')
plot_tau(samples, show.all.taus = TRUE, show.all.betas = TRUE, ncol = 2)
This function generates density plots of the posterior distribution produced by the function hdpGLM.
## S3 method for class 'dpGLM'
plot( x, terms = NULL, separate = FALSE, hpd = TRUE, true.beta = NULL, title = NULL, subtitle = NULL, adjust = 1, ncols = NULL, only.occupied.clusters = TRUE, focus.hpd = FALSE, legend.position = "top", colour = "grey", alpha = 0.4, display.terms = TRUE, plot.mean = TRUE, legend.label.true.value = "True", ... )
x: a dpGLM object with the samples generated by the function hdpGLM
terms: string vector with the names of the covariates to plot. If NULL, all covariates are plotted.
separate: boolean; if TRUE, the posterior distribution of each cluster is displayed separately
hpd: boolean; if TRUE, the 95% HPD interval is displayed
true.beta: either NULL or a data.frame with the true values of the linear coefficients beta, if they are known (e.g., in simulations)
title: string, the title of the plot
subtitle: string, the subtitle of the plot
adjust: the bandwidth used for the densities is adjust times the default bandwidth (see density)
ncols: integer, the number of columns in the plot
only.occupied.clusters: boolean; if TRUE, only the occupied clusters are displayed
focus.hpd: boolean; if TRUE, the plot focuses on the HPD interval
legend.position: one of four options: "top" (default), "bottom", "left", or "right". It indicates the position of the legend.
colour: string with the color used to fill the density plot
alpha: number between 0 and 1 indicating the degree of transparency of the density
display.terms: boolean; if TRUE, the names of the covariates are displayed in the plot
plot.mean: boolean; if TRUE, the posterior mean is displayed
legend.label.true.value: a string with the value to display in the legend when the true values (true.beta) are plotted
...: ignored
# Note: this example is just for illustration. MCMC iterations are very reduced.
set.seed(10)
n = 20
data = tibble::tibble(x1 = rnorm(n, -3),
                      x2 = rnorm(n, 3),
                      z  = sample(1:3, n, replace = TRUE),
                      y  = I(z == 1) * (3 + 4*x1 - x2 + rnorm(n)) +
                           I(z == 2) * (3 + 2*x1 + x2 + rnorm(n)) +
                           I(z == 3) * (3 - 4*x1 - x2 + rnorm(n)))
## estimation
mcmc    = list(burn.in = 1, n.iter = 50)
samples = hdpGLM(y ~ x1 + x2, data = data, mcmc = mcmc, n.display = 1)
plot(samples)
Generic function to plot the posterior density estimation produced by the function hdpGLM
## S3 method for class 'hdpGLM'
plot( x, terms = NULL, j.label = NULL, j.idx = NULL, title = NULL, subtitle = NULL, true.beta = NULL, ncol = NULL, legend.position = "bottom", display.terms = TRUE, context.id = NULL, ylab = NULL, xlab = NULL, x.axis.size = 1.1, y.axis.size = 1.1, title.size = 1.2, panel.title.size = 1.5, legend.size = 1.1, rel.height = 0.01, fill.col = "#00000044", border.col = "white", ... )
x: an object of the class hdpGLM
terms: string vector with the names of the individual-level covariates to plot. If NULL, all covariates are plotted.
j.label: string vector with the names of the contexts to plot. An alternative is to use the context indexes with the parameter j.idx.
j.idx: integer vector with the indexes of the contexts to plot. An alternative is to use the context labels with the parameter j.label.
title: string, the title of the plot
subtitle: string, the subtitle of the plot
true.beta: a data.frame with the true values of the linear coefficients beta, if they are known (e.g., in simulations)
ncol: integer, the number of columns in the plot
legend.position: one of four options: "bottom" (default), "top", "left", or "right". It indicates the position of the legend.
display.terms: boolean; if TRUE, the names of the covariates are displayed in the plot
context.id: string with the name of the column containing the labels identifying the contexts. This variable should have been specified when the estimation was conducted with the function hdpGLM.
ylab: string, the label of the y-axis
xlab: string, the label of the x-axis
x.axis.size: numeric, the relative size of the label in the x-axis
y.axis.size: numeric, the relative size of the label in the y-axis
title.size: numeric, the relative size of the title of the plot
panel.title.size: numeric, the relative size of the titles in the panel of the plot
legend.size: numeric, the relative size of the legend
rel.height: see ggridges::geom_density_ridges
fill.col: string with the color of the densities
border.col: string with the color of the border of the densities
...: additional arguments
Function returns the predicted (fitted) values of the outcome variable using the estimated posterior expectation of the linear coefficients beta produced by the hdpGLM function
## S3 method for class 'dpGLM' predict(object, new_data = NULL, ...)
object |
output of the function hdpGLM |
new_data |
data frame with the values of the covariates that are going to be used to generate the predicted/fitted values. The posterior mean is used to create the predicted values |
... |
|
It returns a data.frame with the fitted values for the outcome variable, which are produced using the estimated posterior expectation of the linear coefficients beta.
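For instance, a minimal sketch (the data frame dat and its columns are illustrative assumptions):

```r
## Sketch: a dpGLM fit (no context-level formula) and fitted values.
## 'dat', 'x1', and 'x2' are illustrative assumptions.
library(hdpGLM)

fit <- hdpGLM(y ~ x1 + x2, data = dat,
              mcmc = list(burn.in = 200, n.iter = 800))

## Fitted values at new covariate values, computed from the
## posterior mean of beta:
new_data <- data.frame(x1 = c(-1, 0, 1), x2 = rep(0.5, 3))
pred <- predict(fit, new_data = new_data)
```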
Function returns the predicted (fitted) values of the outcome variable using the estimated posterior expectation of the linear coefficients beta produced by the hdpGLM function
## S3 method for class 'hdpGLM' predict(object, new_data = NULL, ...)
object |
output of the function hdpGLM |
new_data |
data frame with the values of the covariates that are going to be used to generate the predicted/fitted values. The posterior mean is used to create the predicted values |
... |
|
It returns a data.frame with the fitted values for the outcome variable, which are produced using the estimated posterior expectation of the linear coefficients beta.
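Analogously for hierarchical fits; note that new_data presumably must also carry the context-level covariates used in formula2 (here w, an assumed name):

```r
## Sketch: predictions from an hdpGLM fit; new_data includes the
## context-level covariate 'w' (an assumed name) from formula2.
## 'dat', 'x1', and 'x2' are also illustrative assumptions.
fit <- hdpGLM(y ~ x1 + x2, y ~ w, data = dat,
              mcmc = list(burn.in = 200, n.iter = 800))
new_data <- data.frame(x1 = c(-1, 1), x2 = c(0, 0), w = c(0.2, 0.8))
pred <- predict(fit, new_data = new_data)
```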
Generic method to print a dpGLM object returned by the hdpGLM
function
## S3 method for class 'dpGLM' print(x, ...)
x |
a |
... |
ignore |
returns a summary of the posterior distribution of the parameters
Generic method to print the output of the hdpGLM_simulateData
function
## S3 method for class 'dpGLM_data' print(x, ...)
x |
a |
... |
ignore |
returns a summary of the simulated data
Generic method to print the output of the hdpGLM
function
## S3 method for class 'hdpGLM' print(x, ...)
x |
a |
... |
ignore |
returns a summary of the posterior distribution of the parameters
Generic method to print the output of the hdpGLM_simulateData
function
## S3 method for class 'hdpGLM_data' print(x, ...)
x |
a |
... |
ignore |
returns a summary of the simulated data
This function provides a summary of the MCMC samples from the dpGLM model
summary_tidy(object, ...)
object |
a |
... |
The additional parameters accepted are: true.beta: (see plot.dpGLM) |
Data points are assigned to clusters according to the highest estimated probability of belonging to that cluster
This function provides a summary of the MCMC samples from the dpGLM model
## S3 method for class 'dpGLM' summary(object, ...)
object |
a |
... |
The additional parameters accepted are: true.beta: (see plot.dpGLM) |
Data points are assigned to clusters according to the highest estimated probability of belonging to that cluster
This function summarizes the data and the parameters used to generate the data with the function hdpGLM_simulateData.
## S3 method for class 'dpGLM_data' summary(object, ...)
object |
an object of the class dpGLM_data |
... |
ignored |
The function returns a list with the summary of the data produced by the standard summary function and a data.frame
with the true values of beta for each cluster.
This is a generic summary function that describes the output of the function hdpGLM
## S3 method for class 'hdpGLM' summary(object, ...)
object |
an object of the class |
... |
Additional arguments accepted are:
|
The function hdpGLM returns a list with the samples from the posterior distribution along with other elements. That list contains an element named context.cov
that connects the index "C" created during the estimation with the context-level covariates: each unique combination of context-level covariates receives an index C during the estimation. The algorithm only requires the context-level covariates, but it creates the index C to support the estimation. If true.beta is provided, it must also contain a context index for each linear coefficient beta. That index will generally differ from the one created by the algorithm. Therefore, when true.beta
is provided, the context index C generated by the algorithm must be connected with the column j in the true.beta data.frame in order to compare the true and the estimated values for each context. That is why the values of the context-level covariates are also needed: the summary uses them as a key to merge the true and the estimated values for each context. The true and estimated clusters are matched based on the shortest distance between the estimated posterior average and the true value in each context, because the cluster labels produced by the estimation can vary even though the same data points are assigned to the same clusters.
The function returns a list with two data.frames. The first summarizes the posterior distribution of the linear coefficients beta; the mean, median, and the 95% HPD interval are provided. The second data.frame contains the summary of the posterior distribution of the parameter tau.
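For instance, a sketch of comparing estimated and true values on simulated data; the arguments of hdpGLM_simulateData and the element names of its return value (sim$data, sim$parameters$beta) are assumptions, not verbatim package code.

```r
## Sketch: comparing estimated and true values on simulated data.
## The simulation call and the element names of its return value
## are assumptions, not verbatim package code.
library(hdpGLM)

sim <- hdpGLM_simulateData(n = 2000, K = 2, nCov = 2, J = 10)
fit <- hdpGLM(y ~ X1 + X2, y ~ W1, data = sim$data,
              mcmc = list(burn.in = 200, n.iter = 800))

## Merges true and estimated betas by context and matches clusters by
## the shortest distance to the estimated posterior averages:
summary(fit, true.beta = sim$parameters$beta)
```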
This function summarizes the data simulated by the function hdpGLM_simulateData
## S3 method for class 'hdpGLM_data' summary(object, ...)
object |
an object of the class |
... |
ignored |
It returns a list with three elements. The first is a summary of the data, the second is a tibble with the linear coefficients beta and the values used to generate the data, and the third is a tibble with the true values of tau used to generate the betas.
A dataset containing simulated data about public opinion
welfare
A data frame with 2000 rows and 4 variables:
support for welfare policies
levels of inequality in the neighborhood
individual-level income
individual-level ideology
Simulated data
A dataset containing simulated data about public opinion in different countries
welfare2
A data frame with 2000 rows and 6 variables:
support for welfare policies
levels of inequality in the neighborhood
individual-level income
individual-level ideology
country label or index
country-level gender gap in the country's provision of public goods
Simulated data
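A sketch of fitting a context-dependent model to welfare2; the column names below are taken from the variable descriptions above and may need adjusting.

```r
## Sketch: context-dependent model for the simulated welfare2 data.
## Column names are assumed from the variable descriptions above.
library(hdpGLM)
data(welfare2)

fit <- hdpGLM(support ~ inequality + income + ideology,
              support ~ gap,
              data = welfare2, context.id = "country",
              mcmc = list(burn.in = 500, n.iter = 2000),
              family = "gaussian")
summary(fit)
```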