Package: PIE 1.0.0

Jingyi Yang

PIE: A Partially Interpretable Model with Black-Box Refinement

Implements a novel predictive model, Partially Interpretable Estimators (PIE), which jointly trains an interpretable model and a black-box model to achieve high predictive performance as well as partial model interpretability. See Wang, Yang, Li, and Wang (2021) <doi:10.48550/arXiv.2105.02410>.

Authors: Tong Wang [aut], Jingyi Yang [aut, cre], Yunyi Li [aut], Boxiang Wang [aut]

Downloads:
PIE_1.0.0.tar.gz (source)
PIE_1.0.0.tar.gz (r-4.5-noble), PIE_1.0.0.tar.gz (r-4.4-noble)
PIE_1.0.0.tgz (r-4.4-emscripten), PIE_1.0.0.tgz (r-4.3-emscripten)
PIE.pdf | PIE.html
PIE/json (API)

# Install 'PIE' in R:
install.packages('PIE', repos = c('https://cran.r-universe.dev', 'https://cloud.r-project.org'))
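
Once installed, the package can be loaded and its documentation browsed from the R console. This is a minimal sketch; the vignette name "PIE" is an assumption inferred from the PIE.Rmd source listed further down.

# Load the package and browse its documentation:
library(PIE)
help(package = "PIE")               # lists the exported functions
vignette("PIE", package = "PIE")    # introductory vignette (assumed name, from PIE.Rmd)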

This package does not link to any GitHub/GitLab/R-Forge repository. No issue tracker or development information is available.

Score: 2.00 | Exports: 5 | Dependencies: 6

Last updated 25 days ago from ca2c6b06d6. Checks: 2 OK. Indexed: no.

Target           Result  Latest binary
Doc / Vignettes  OK      Jan 27 2025
R-4.5-linux      OK      Jan 27 2025

Exports: data_process, MAE, PIE_fit, RPE, sparsity_count

Dependencies: data.table, gglasso, jsonlite, lattice, Matrix, xgboost
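
The exported functions suggest a typical workflow: preprocess data with data_process(), fit the joint interpretable/black-box model with PIE_fit(), and evaluate the result with MAE(), RPE(), and sparsity_count(). The sketch below is illustrative only; the argument names and return structure are assumptions (not the documented API), so the calls are left as comments. See the package manual (PIE.pdf) for the actual signatures.

# Hypothetical workflow sketch (argument names are assumptions, not the real API):
library(PIE)
# prep <- data_process(my_data)                 # assumed: split/encode the raw data
# fit  <- PIE_fit(prep$X_train, prep$y_train)   # assumed: fit interpretable + xgboost parts
# pred <- predict(fit, prep$X_test)             # assumed: predict on held-out data
# MAE(prep$y_test, pred)                        # mean absolute error
# RPE(prep$y_test, pred)                        # relative prediction error
# sparsity_count(fit)                           # sparsity of the interpretable component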

Introduction to PIE -- A Partially Interpretable Model with Black-box Refinement

Rendered from PIE.Rmd using knitr::rmarkdown on Jan 27 2025.

Last update: 2025-01-27
Started: 2025-01-27