Title: | Minimization Tool for Pharmacokinetic-Pharmacodynamic Data Analysis |
---|---|
Description: | This is a set of minimization tools (maximum likelihood estimation and least squares fitting) to solve the examples in Johan Gabrielsson and Dan Weiner's book "Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications" 5th ed. (ISBN:9198299107). Examples include linear and nonlinear compartmental models, turn-over models, single- and multiple-dose bolus/infusion/oral models, allometry, toxicokinetics, reversible metabolism, in-vitro/in-vivo extrapolation, enterohepatic circulation, metabolite modeling, the Emax model, the inhibitory model, the tolerance model, the oscillating response model, the enantiomer interaction model, the effect compartment model, the drug-drug interaction model, the receptor occupancy model, and the rebound phenomena model. |
Authors: | Kyun-Seop Bae [aut] |
Maintainer: | Kyun-Seop Bae <[email protected]> |
License: | GPL-3 |
Version: | 0.8.1 |
Built: | 2024-10-30 06:49:17 UTC |
Source: | CRAN |
This is a minimization tool to solve the examples in the book Gabrielsson J, Weiner D. 'Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications' 5th ed. 2016. (ISBN:9198299107).
This is a set of minimization tools to solve all the examples in the book 'Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications' 5th ed. 2016.
Kyun-Seop Bae <[email protected]>
Gabrielsson J, Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis - Concepts and Applications. 5th ed. 2016.
tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
fPK = function(THETA) # Prediction function
{
  DOSE = 320000 # in microgram
  TIME = e$DATA[, "TIME"] # use data in e$DATA
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  Cp = DOSE/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
  return(Cp)
}
IDs = unique(tData[, "ID"])
nID = length(IDs)
for (i in 1:nID) {
  Data = tData[tData$ID == IDs[i], ]
  Res = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500),
            SecNames=c("CL", "Thalf", "MRT"), SecForms=c(~V*k, ~log(2)/k, ~1/k))
  print(paste("## ID =", i, "##"))
  print(Res)
}
It performs a chi-square test for the comparison of two models.
cmpChi(r1, r2)
r1 |
A result from nlr |
r2 |
Another result from nlr |
One model should be nested within the other.
Returns a p-value from pchisq
Kyun-Seop Bae <[email protected]>
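A minimal sketch of a typical use, assuming the wnl package is attached and following the nlr examples elsewhere in this document. The choice of nested models here (full model vs. a reduced model with ka fixed via the fix argument, which this sketch assumes holds the parameter at its initial estimate) is illustrative, not from the original text:

```r
library(wnl)

tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
Data = tData[tData$ID == 1, ]

fPK = function(THETA) # one-compartment oral absorption model
{
  DOSE = 320000 # in microgram
  TIME = e$DATA[, "TIME"]
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  DOSE/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
}

r1 = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500))        # full model
r2 = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500), fix=2) # reduced: ka fixed
cmpChi(r2, r1) # p-value from the chi-square test on the -2LL difference
```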
It calculates compartment amounts using a one-compartment model.
Comp1(Ke, Ka=0, DH)
Ke |
Elimination rate constant |
Ka |
Absorption rate constant |
DH |
Expanded dosing history table |
The first compartment is the gut compartment for oral dosing. IV bolus and infusion doses should be given into the second compartment.
This returns a table with columns for the gut and central compartments.
Kyun-Seop Bae <[email protected]>
DAT
DAT2 = ExpandDH(DAT)
X1 = Comp1(Ke=0.1, Ka=1, DAT2)
X1
matplot(DAT2[, "TIME"], X1, type="l")
This is a conventional NONMEM input data format.
DAT
This data frame has 5 columns with 18 time-points for the simulation.
TIME
Time
AMT
Amount given to the compartment specified by the CMT column
RATE
Infusion rate
CMT
Compartment number, 1=gut, 2=central, 3=peripheral, etc.
DV
Currently blank and not used.
To be used with Comp1 or nComp, expand the dosing history with the ExpandDH function.
It performs a simple diagnostic plot from the result of nlr.
dx(r)
r |
A result from nlr |
This plots 'Observation vs. Prediction' and 'Normalized Residual vs. Prediction' only. Normalized residuals are expected to follow the standard normal distribution, N(0, 1).
This just draws a plot.
Kyun-Seop Bae <[email protected]>
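A minimal sketch of typical use, assuming the wnl package is attached: fit one subject with nlr as in the nlr examples, then pass the result to dx.

```r
library(wnl)

tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
Data = tData[tData$ID == 1, ]

fPK = function(THETA) # one-compartment oral absorption model
{
  DOSE = 320000 # in microgram
  TIME = e$DATA[, "TIME"]
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  DOSE/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
}

Res = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500))
dx(Res) # draws Observation vs. Prediction and Normalized Residual vs. Prediction
```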
Get an environment's visible objects as a list.
EnvObj(envir = e)
envir |
The environment whose contents are returned |
All the visible objects in the environment, including functions and data, will be returned.
All visible objects as a list
Kyun-Seop Bae <[email protected]>
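A minimal sketch, assuming the wnl package is attached: after an nlr fit, the package's internal environment e holds the working objects (e.g. DATA, as used by the prediction functions in this document), and EnvObj can collect them for inspection.

```r
library(wnl)

tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
Data = tData[tData$ID == 1, ]

fPK = function(THETA) # one-compartment oral absorption model
{
  TIME = e$DATA[, "TIME"]
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  320000/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
}

Res = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500))
Obj = EnvObj() # default envir = e
names(Obj)     # inspect what the fit left behind
```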
It expands a dosing history table.
ExpandDH(DH, Fo = 1)
DH |
Dosing history table of NONMEM type |
Fo |
Bioavailability of the first (gut) compartment |
It expands a dosing history table of the conventional NONMEM data format. It calculates the bioavailable amount, then adds the time points where the function is non-differentiable, e.g. the stopping points of infusions.
Returns an expanded dosing history table.
Kyun-Seop Bae <[email protected]>
DAT
ExpandDH(DAT) # One observation point is added at the time of 27.
Hougaard measure of skewness for a nonlinear regression
hSkew(rx)
rx |
A result of the nls function |
The Hougaard measure of skewness can be used to check whether the parameters of a nonlinear regression behave in a linear fashion, i.e. have symmetric confidence intervals. Be cautious about variable name conflicts: all the variables in the nonlinear function should be accessible by the function.
Hougaard estimate of skewness for each parameter
(0, 0.1] |
The estimate is very close-to-linear in behavior. |
(0.1, 0.25] |
The estimate is reasonably close-to-linear in behavior. |
(0.25, 1] |
The skewness is apparent. |
>1 |
The estimate is considerably nonlinear in behavior. |
Kyun-Seop Bae <[email protected]>
EL-Shehawy SA. On Calculating the Hougaard Measure of Skewness in a Nonlinear Regression Model with Two Parameters. J Math & Stat. 2009;5(4):360-364.
r1 = nls(density ~ b1*conc/(b2 + conc), DNase[DNase$Run == 1, ], start=list(b1=3, b2=5))
hSkew(r1)
It calculates compartment amounts using a multi-compartment model.
nComp(Sol, Ka=0, DH)
Sol |
Solution list of lambdas and coefficients |
Ka |
Absorption rate constant |
DH |
Expanded dosing history table |
The first compartment is the gut compartment for oral dosing. IV bolus and infusion doses should be given into the second compartment. If a bolus dose was given at time T, it is reflected at times greater than T. This is closer to real observation. ADAPT behaves this way, but NONMEM does not.
This returns a table with columns for the gut and the other compartments.
Kyun-Seop Bae <[email protected]>
DAT
DAT2 = ExpandDH(DAT)
Sol = SolComp2(K10=0.1, K12=3, K21=1)
X2 = nComp(Sol, Ka=1, DAT2)
X2
matplot(DAT2[, "TIME"], X2, type="l")
It performs nonlinear regression usually for pharmacokinetic and pharmacodynamic models.
nlr(Fx, Data, pNames, IE, LB, UB, Error="A", ObjFx=ObjDef, SecNames, SecForms, Method="L-BFGS-B", Sx, conf.level=0.95, k, fix=0)
Fx |
Function for the structural model. It should return a vector of the same length as the observations. |
Data |
Data table which will be used in Fx. Fx should access this as e$DATA. |
pNames |
Parameter names in the order of Fx arguments |
IE |
Initial estimates of parameters |
LB |
Lower bound for the parameter estimates |
UB |
Upper bound for the parameter estimates |
Error |
Error model. The default is "A" (additive); Error="S" uses the scale function Sx. |
ObjFx |
Objective function to be minimized. The default is the maximum likelihood estimation function (-2 log likelihood). |
SecNames |
Names of secondary parameter estimates |
SecForms |
Formula to calculate the secondary parameter estimates |
Method |
Optimization method. The default is "L-BFGS-B". |
Sx |
Scale function. This is usually the inverse of the weight. It should return a vector of the same length (nrow) as Y. When Error="S", the scale function should be provided as this argument. |
conf.level |
Confidence level for confidence interval |
k |
The 1/k likelihood interval (LI) will be provided. The currently recommended value is exp(qf(1 - alpha, 1, nRec - nPara)/2) + 1. |
fix |
Indices of parameters to fix |
This uses scaled transformed parameters and the environment e internally.
Est |
Point estimate (PE) with standard error (SE) and relative standard error (RSE) |
LI |
1/k likelihood interval, at which the likelihood drops to 1/k of the maximum likelihood. This reflects asymmetry better than a confidence interval. This is the estimated likelihood interval, not the profile likelihood interval. |
Skewness |
Hougaard's skewness measure. This is printed only with the additive error model. See also hSkew. |
Cov |
Variance-covariance matrix of the objective function at the value of point estimates |
run$m |
Count of positive residuals |
run$n |
Count of negative residuals |
run$run |
Count of runs of residuals |
run$p.value |
P value of the run test, excluding zero points |
Objective Function Value |
Minimum value of the objective function |
-2LL |
-2 times log likelihood |
AIC |
Akaike Information Criterion |
AICc |
Corrected Akaike Information Criterion |
BIC |
Schwarz Bayesian Information Criterion |
Convergence |
Convergence code from optim |
Message |
Message from optim |
Prediction |
Fitted(predicted) values |
Residuals |
Residuals |
Scale |
Scales with Error="S". The variance for each point is the scale vector multiplied by |
Elapsed Time |
Time consumed by the minimization |
Kyun-Seop Bae <[email protected]>
tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
fPK = function(THETA) # Prediction function
{
  DOSE = 320000 # in microgram
  TIME = e$DATA[, "TIME"] # use data in e$DATA
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  P = DOSE/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
  return(P)
}
IDs = unique(tData[, "ID"])
nID = length(IDs)
for (i in 1:nID) {
  Data = tData[tData$ID == IDs[i], ]
  Res = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500),
            SecNames=c("CL", "Thalf", "MRT"), SecForms=c(~V*k, ~log(2)/k, ~1/k))
  print(paste("## ID =", i, "##"))
  print(Res)
}

# Another example from radioimmunoassay (RIA)
d1 = data.frame(conc = c(200, 100, 50, 25, 12.5, 6.25, 3.125, 0),
                DV = c(1.78, 1.5, 1.17, 0.74, 0.51, 0.31, 0.19, 0.04))
PRED = function(TH) TH[1] + TH[2]*d1$conc^TH[4]/(TH[3]^TH[4] + d1$conc^TH[4])
Scale = function(TH) 1/(PRED(TH) - (TH[1] + TH[2])/2)^2
nlr(PRED, d1, pNames=c("R0", "Rmax", "RC50", "Hill"), IE=c(0.1, 3, 50, 1),
    Error="S", Sx=Scale)
It plots the diagram of a compartment model.
pComp(dComp, dRate, Shape="rect", Col=NA, Bx=0.3, By=0.2, Cex=1.0, Lwd=3, Radius=0.3, thIn=pi/2, thOut=pi/2, ...)
dComp |
data.frame for a compartment model. See the example. |
dRate |
data.frame for rate information. See the example. |
Shape |
rectangle or circle |
Col |
filling color |
Bx |
half width of compartment box |
By |
half height of compartment box |
Cex |
character expansion |
Lwd |
line width |
Radius |
radius of compartment circle |
thIn |
Input angle in radians |
thOut |
Output angle in radians |
... |
Arguments to be passed to the plot function |
Flow direction is from the top to bottom.
It plots.
Kyun-Seop Bae <[email protected]>
dA = data.frame(No = c(1, 2, 3, 4),
                Name = c("Gut Depot", "Skin Depot", "Central", "Peripheral"),
                Level = c(1, 1, 2, 2),
                xPos = c(-0.5, 0.5, 0, 1))
dB = data.frame(From = c(1, 2, 3, 4, 3, 0, 0),
                To = c(3, 3, 4, 3, 5, 1, 2),
                Name = c("KA", "KA2", "K12", "K21", "CL", "F1", "F2"))
pComp(dA, dB)
# par(oma=c(0, 0, 0, 0), mar=c(0, 0, 1, 0)) # If needed, adjust margins before calling
pComp(dA, dB, "circ", main="Compartmental Model Diagram")
pComp(dA, dB, "circ", main="Compartmental Model Diagram", Col="#DDEEFF", asp=1)
It plots the estimated likelihood profile. This is not the profile likelihood.
pProf(Bag = e, Title = "", ...)
Bag |
An environment or an object containing the objects of the resultant environment e after nlr() |
Title |
title for the plot |
... |
arguments to pass to the plot function |
This plots likelihood profile from the result of nlr() function. Bag should contain the results of nlr().
No values but a plot.
Kyun-Seop Bae <[email protected]>
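A minimal sketch of typical use, assuming the wnl package is attached: run nlr as in its examples, then pass the resultant environment e (which then contains the nlr results, as the details above require) to pProf. The Title text is illustrative.

```r
library(wnl)

tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
Data = tData[tData$ID == 1, ]

fPK = function(THETA) # one-compartment oral absorption model
{
  DOSE = 320000 # in microgram
  TIME = e$DATA[, "TIME"]
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  DOSE/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
}

Res = nlr(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500))
pProf(e, Title="Estimated likelihood profile, Theoph subject 1")
```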
Get the standard error and relative standard error (CV) of a secondary parameter estimate
Secondary(Formula, PE, COV)
Formula |
Formula to calculate the secondary parameter estimate |
PE |
Point estimates of primary parameters with names |
COV |
Variance-covariance matrix of primary estimates |
Variables within Formula should exist in the names of the PE vector.
This returns the point estimate, standard error, and relative standard error of the secondary parameter estimate.
Kyun-Seop Bae <[email protected]>
tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV") # Table requires DV column
fPK = function(THETA) # Prediction function
{
  AMT = 320000 # in microgram
  TIME = e$DATA[, "TIME"]
  V = THETA[1]
  K = THETA[2]
  Ka = THETA[3]
  Cp = AMT/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
  return(Cp)
}
Data = tData[tData$ID == 1, ]
Res = nlr(fPK, Data, pNames=c("V", "K", "Ka"), IE=c(30000, 0.1, 2))
Secondary(~V*K, Res$Est["PE", 1:e$nPara], Res$Cov)
It calculates lambdas and coefficients for two-compartment model from K10, K12, and K21.
SolComp2(K10, K12, K21)
K10 |
Ke, Elimination rate constant from central compartment |
K12 |
Rate constant from the central to the peripheral compartment |
K21 |
Rate constant from the peripheral to the central compartment |
It calculates lambdas and coefficients of two-compartment model from K10, K12, and K21. Lambdas should have no identical values.
This returns a list of lambdas and coefficients.
Kyun-Seop Bae <[email protected]>
DAT
DAT2 = ExpandDH(DAT)
Sol = SolComp2(K10=0.1, K12=3, K21=1)
X2 = nComp(Sol, Ka=1, DAT2)
X2
matplot(DAT2[, "TIME"], X2, type="l")
It calculates lambdas and coefficients for three-compartment model from K10, K12, K21, K13, and K31.
SolComp3(K10, K12, K21, K13, K31)
K10 |
Ke, Elimination rate constant from central compartment |
K12 |
Rate constant from the central to the first peripheral compartment |
K21 |
Rate constant from the first peripheral to the central compartment |
K13 |
Rate constant from the central to the second peripheral compartment |
K31 |
Rate constant from the second peripheral to the central compartment |
It calculates lambdas and coefficients of a three-compartment model from K10, K12, K21, K13, and K31. Lambdas should have no identical values.
This returns a list of lambdas and coefficients.
Kyun-Seop Bae <[email protected]>
DAT
DAT2 = ExpandDH(DAT)
Sol = SolComp3(K10=0.1, K12=3, K21=1, K13=2, K31=0.5)
X3 = nComp(Sol, Ka=1, DAT2)
X3
matplot(DAT2[, "TIME"], X3, type="l")
It performs old-type WinNonlin regression.
wnl5(Fx, Data, pNames, IE, LB, UB, Error="A", ObjFx=ObjLS)
Fx |
Function for the structural model. It should return a vector of the same length as the observations. |
Data |
Data table which will be used in Fx. Fx should access this as e$DATA. |
pNames |
Parameter names in the order of Fx arguments |
IE |
Initial estimates of parameters |
LB |
Lower bound for the parameter estimates |
UB |
Upper bound for the parameter estimates |
Error |
Error model. The default is "A" (additive). |
ObjFx |
Objective function to be minimized. The default is least square function. |
This uses scaled transformed parameters and the environment e internally. It does not provide standard errors; if you need standard errors, use nlr.
PE |
Point estimates |
WRSS |
Weighted Residual Sum of Square |
run$m |
Count of positive residuals |
run$n |
Count of negative residuals |
run$run |
Count of runs of residuals |
run$p.value |
P value of the run test, excluding zero points |
Objective Function Value |
Minimum value of the objective function |
AIC |
Akaike Information Criterion |
SBC |
Schwarz Bayesian Information Criterion |
Condition Number |
Condition number |
Message |
Message from |
Prediction |
Fitted(predicted) values |
Residuals |
Residuals |
Elapsed Time |
Time consumed by the minimization |
Kyun-Seop Bae <[email protected]>
tData = Theoph
colnames(tData) = c("ID", "BWT", "DOSE", "TIME", "DV")
fPK = function(THETA) # Prediction function
{
  DOSE = 320000 # in microgram
  TIME = e$DATA[, "TIME"] # use data in e$DATA
  K = THETA[1]
  Ka = THETA[2]
  V = THETA[3]
  Cp = DOSE/V*Ka/(Ka - K)*(exp(-K*TIME) - exp(-Ka*TIME))
  return(Cp)
}
IDs = unique(tData[, "ID"])
nID = length(IDs)
for (i in 1:nID) {
  Data = tData[tData$ID == IDs[i], ]
  Res = wnl5(fPK, Data, pNames=c("k", "ka", "V"), IE=c(0.1, 3, 500))
  print(paste("## ID =", i, "##"))
  print(Res)
}