CGNM: Cluster Gauss-Newton Method

library(CGNM)
library(knitr)

When and when not to use CGNM

Use CGNM

  • You wish to fit a relatively complex parameterized model/curve/function to data but are unsure of an appropriate initial guess (e.g., the “start” argument of the nls function). => CGNM searches for the parameters from multiple initial guesses, so the user can specify a rough initial range instead of a single initial point.
  • The practical identifiability (whether or not there is only one best-fit parameter set) is unknown. => CGNM finds multiple sets of best-fit parameters; if a parameter is not practically identifiable, the multiple best-fit parameters found by CGNM will not converge to a single point.
  • The model is a black box (i.e., it cannot be explicitly/easily written out as a “formula”) and may not be continuous with respect to the parameters. => CGNM makes minimal assumptions about the model, so all the user needs to provide is a function that takes the model parameters and returns the simulation (CGNM does not care what happens inside that function).

Not to use CGNM

  • You already know approximately where the best-fit parameter set is and you just need to find one best-fit parameter set. => Simply use nls; it will be faster.
  • The model is relatively simple and computational cost is not an issue. => CGNM is designed to reduce the number of model evaluations during parameter estimation, so you may not see much advantage over conventional multi-start methods (e.g., repeatedly running nls from various “start” values; a minimal sketch follows this list).
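By way of comparison, here is a minimal sketch of the conventional multi-start approach mentioned above (the data, model, and start ranges below are hypothetical illustrations, not part of the CGNM package):

# hypothetical data: exponential decay with additive noise
set.seed(1)
t_obs=c(0.5,1,2,4,8)
y_obs=10*exp(-0.7*t_obs)+rnorm(length(t_obs), sd=0.1)

# repeatedly run nls from randomly drawn "start" values,
# keeping only the fits that converge
fit_list=lapply(seq_len(20), function(i){
  start_i=list(A=runif(1,1,20), k=runif(1,0.01,2))
  try(nls(y_obs~A*exp(-k*t_obs), start=start_i), silent=TRUE)
})
converged=Filter(function(f) !inherits(f,"try-error"), fit_list)

Each model evaluation is repeated independently for every start, which is exactly the cost CGNM is designed to reduce.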

How to use CGNM

To illustrate the use of CGNM, we show how it can be used to estimate two sets of best-fit parameters of a pharmacokinetic model for an orally administered drug (a situation known as flip-flop kinetics).

Prepare the model (f)

model_function=function(x){

  observation_time=c(0.1,0.2,0.4,0.6,1,2,3,6,12)
  Dose=1000
  F=1 # bioavailability fraction

  ka=x[1]   # absorption rate constant
  V1=x[2]   # volume of distribution
  CL_2=x[3] # clearance
  t=observation_time

  # one-compartment model with first-order absorption and elimination
  Cp=ka*F*Dose/(V1*(ka-CL_2/V1))*(exp(-CL_2/V1*t)-exp(-ka*t))

  log10(Cp)
}
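Before running CGNM, it is worth evaluating the function once to check that it runs cleanly. The parameter values below are arbitrary illustrations, chosen only so that ka differs from CL_2/V1:

model_function(c(1, 10, 5)) # log10 plasma concentrations at the nine observation times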

Prepare the data (y*)

observation=log10(c(4.91, 8.65, 12.4, 18.7, 24.3, 24.5, 18.4, 4.66, 0.238))
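Note that both the model output and the observations are on the log10 scale, so the least squares fit is performed in log space, which roughly corresponds to assuming a constant relative (rather than additive) error on the concentration scale.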

Run Cluster_Gauss_Newton_method

Here we specify the lower and upper ranges of the initial guesses.

CGNM_result=Cluster_Gauss_Newton_method(nonlinearFunction=model_function,
  targetVector = observation,
  initial_lowerRange = rep(0.01,3), initial_upperRange = rep(100,3), lowerBound = rep(0,3),
  saveLog = TRUE, num_minimizersToFind = 500, ParameterNames = c("Ka","V1","CL"))
#> Ka, V1, CL will be transformed internally to impose boundaries. See CGNM_result$runSetting$ReparameterizationDef for exact transformation. Transformed parameters are denoted as x and untransformed parameters are denoted as theta.
#> checking if the nonlinearFunction can be evaluated at the initial_lowerRange
#> Evaluation Successful
#> checking if the nonlinearFunction can be evaluated at the initial_upperRange
#> Evaluation Successful
#> checking if the nonlinearFunction can be evaluated at the (initial_upperRange+initial_lowerRange)/2
#> Evaluation Successful
#> CGNM iteration should finish before: 2024-12-10 06:37:21.59153
#> Generating initial cluster. 494 out of 500 done
#> Generating initial cluster. 500 out of 500 done
#> Iteration:1  Median sum of squares residual=5.74451799951206
#> Rough estimation of remaining computation time: 0.9 min
#> CGNM iteration estimated to finish at: 2024-12-10 06:35:41.686747
#> Iteration:2  Median sum of squares residual=2.52889351373123
#> Iteration:3  Median sum of squares residual=1.12035859370579
#> Iteration:4  Median sum of squares residual=0.926981161947034
#> Iteration:5  Median sum of squares residual=0.92697602248231
#> Iteration:6  Median sum of squares residual=0.480258577752242
#> Iteration:7  Median sum of squares residual=0.209844144195492
#> Iteration:8  Median sum of squares residual=0.00942877811413272
#> Iteration:9  Median sum of squares residual=0.00735015326760216
#> Iteration:10  Median sum of squares residual=0.00734924769060116
#> Iteration:11  Median sum of squares residual=0.00734923410511653
#> CGNM iteration estimated to finish at: 2024-12-10 06:35:32.549741
#> Iteration:12  Median sum of squares residual=0.007349234083777
#> Iteration:13  Median sum of squares residual=0.00734923408269502
#> Iteration:14  Median sum of squares residual=0.00734923408258082
#> Iteration:15  Median sum of squares residual=0.00734923408245494
#> Iteration:16  Median sum of squares residual=0.00734923408239799
#> Iteration:17  Median sum of squares residual=0.00734923408239253
#> Iteration:18  Median sum of squares residual=0.0073492340823862
#> Iteration:19  Median sum of squares residual=0.00734923408238274
#> Iteration:20  Median sum of squares residual=0.00734923408238184
#> Iteration:21  Median sum of squares residual=0.00734923408238142
#> CGNM iteration estimated to finish at: 2024-12-10 06:35:32.282833
#> Iteration:22  Median sum of squares residual=0.0073492340823814
#> Iteration:23  Median sum of squares residual=0.00734923408238102
#> Iteration:24  Median sum of squares residual=0.00734923408238092
#> Iteration:25  Median sum of squares residual=0.00734923408238088
#> CGNM computation time:  0.7 min
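The median SSR drops by several orders of magnitude over the first ten iterations and then plateaus at about 0.00735, suggesting that the bulk of the 500 approximate minimizers has converged.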

Obtain the approximate minimizers

kable(head(acceptedApproximateMinimizers(CGNM_result)))
|        Ka|       V1|       CL|
|---------:|--------:|--------:|
| 0.5178956| 10.66084| 9.877326|
| 0.9265056| 19.07204| 9.877326|
| 0.9265056| 19.07204| 9.877326|
| 0.5178956| 10.66084| 9.877326|
| 0.5178956| 10.66084| 9.877326|
| 0.5178956| 10.66084| 9.877326|
kable(table_parameterSummary(CGNM_result))
|CGNM: |    Minimum| 25 percentile|     Median| 75 percentile|    Maximum|
|:-----|----------:|-------------:|----------:|-------------:|----------:|
|Ka    |  0.5178655|     0.5178956|  0.5178956|     0.9265056|  0.9265799|
|V1    | 10.6598590|    10.6608371| 10.6608381|    19.0720415| 19.0732877|
|CL    |  9.8770685|     9.8773257|  9.8773257|     9.8773258|  9.8777148|
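Two distinct clusters appear among the accepted minimizers: (Ka ≈ 0.518, V1 ≈ 10.66) and (Ka ≈ 0.927, V1 ≈ 19.07), both with CL ≈ 9.877. The absorption rate constant Ka and the elimination rate constant CL/V1 simply swap between the two clusters, which is exactly the flip-flop kinetics ambiguity this example was chosen to illustrate.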

Residual resampling bootstrap analysis can also be run using CGNM

CGNM_bootstrap=Cluster_Gauss_Newton_Bootstrap_method(CGNM_result, nonlinearFunction=model_function)
#> checking if the nonlinearFunction can be evaluated at the initial_lowerRange
#> Evaluation Successful
#> checking if the nonlinearFunction can be evaluated at the initial_upperRange
#> Evaluation Successful
#> checking if the nonlinearFunction can be evaluated at the (initial_upperRange+initial_lowerRange)/2
#> Evaluation Successful
#> CGNM iteration should finish before: 2024-12-10 06:35:33.290062
#> Generating initial cluster. 200 out of 200 done
#> Iteration:1  Median sum of squares residual=0.0117643029385501
#> Rough estimation of remaining computation time: 0.1 min
#> CGNM iteration estimated to finish at: 2024-12-10 06:35:36.129377
#> Iteration:2  Median sum of squares residual=0.0109873451772748
#> Iteration:3  Median sum of squares residual=0.0109211943960194
#> Iteration:4  Median sum of squares residual=0.0109210921435761
#> Iteration:5  Median sum of squares residual=0.0109210832776185
#> Iteration:6  Median sum of squares residual=0.0109210813967911
#> Iteration:7  Median sum of squares residual=0.0109210813849233
#> Iteration:8  Median sum of squares residual=0.010921081225684
#> Iteration:9  Median sum of squares residual=0.010921081225684
#> Iteration:10  Median sum of squares residual=0.010921081225684
#> Iteration:11  Median sum of squares residual=0.010921081225684
#> CGNM iteration estimated to finish at: 2024-12-10 06:35:35.624265
#> Iteration:12  Median sum of squares residual=0.0109210807515955
#> Iteration:13  Median sum of squares residual=0.0109210807515955
#> Iteration:14  Median sum of squares residual=0.0109210807515955
#> Iteration:15  Median sum of squares residual=0.0109210807515955
#> Iteration:16  Median sum of squares residual=0.0109210807515955
#> Iteration:17  Median sum of squares residual=0.0109210807515955
#> Iteration:18  Median sum of squares residual=0.0109210807515955
#> Iteration:19  Median sum of squares residual=0.0109210807515955
#> Iteration:20  Median sum of squares residual=0.0109210807515955
#> Iteration:21  Median sum of squares residual=0.0109210807515954
#> CGNM iteration estimated to finish at: 2024-12-10 06:35:35.586614
#> Iteration:22  Median sum of squares residual=0.0109210807515954
#> Iteration:23  Median sum of squares residual=0.0109210807515954
#> Iteration:24  Median sum of squares residual=0.0109210807515954
#> Iteration:25  Median sum of squares residual=0.0109210807515954
#> CGNM computation time:  0.1 min
kable(table_parameterSummary(CGNM_bootstrap))
|CGNM Bootstrap: |  Minimum| 25 percentile|     Median| 75 percentile|   Maximum|   RSE (%)|
|:---------------|--------:|-------------:|----------:|-------------:|---------:|---------:|
|Ka              | 0.487292|     0.5160089|  0.5386429|     0.9159255|  1.065106| 29.970378|
|V1              | 9.237087|    10.5200980| 11.4315069|    18.7093340| 21.500887| 29.992113|
|CL              | 9.290934|     9.6744127|  9.8124539|    10.0609200| 10.733425|  2.816224|
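The RSEs of Ka and V1 (about 30%) are inflated because the bootstrap replicates span both flip-flop branches (note the Ka range, 0.49 to 1.07, covers both clusters), whereas CL, which is shared by the two clusters, has an RSE below 3%.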

Visualize the CGNM model fit analysis result

To use the plot functions the user needs to manually load ggplot2.

library(ggplot2)

Inspect the distribution of SSR of approximate minimizers found by CGNM

Despite the robustness of the algorithm, not all approximate minimizers converge, so here we visually inspect how many of the approximate minimizers have an SSR similar to the minimum SSR. Currently the algorithm automatically chooses the “acceptable” approximate minimizers based on Grubbs’ test for outliers. If for whatever reason this criterion is not satisfactory, the user can manually set the indices of the acceptable approximate minimizers.

plot_Rank_SSR(CGNM_result)

plot_paraDistribution_byHistogram(CGNM_bootstrap, bins = 50)+scale_x_continuous(trans="log10")

Visually inspect the goodness of fit of the top 50 approximate minimizers

plot_goodnessOfFit(CGNM_result, plotType = 1, independentVariableVector = c(0.1,0.2,0.4,0.6,1,2,3,6,12), plotRank = seq(1,50))

Plot model predictions with uncertainty based on the residual resampling bootstrap analysis

plot_goodnessOfFit(CGNM_bootstrap, plotType = 1, independentVariableVector = c(0.1,0.2,0.4,0.6,1,2,3,6,12))

Plot the profile likelihood

plot_profileLikelihood(c("CGNM_log","CGNM_log_bootstrap"))+scale_x_continuous(trans="log10")
#> [1] "log saved in /tmp/Rtmpz8mK6u/Rbuildb3d43a99810/CGNM/vignettes/CGNM_log is used to draw SSR/likelihood surface"
#> [1] "log saved in /tmp/Rtmpz8mK6u/Rbuildb3d43a99810/CGNM/vignettes/CGNM_log_bootstrap is used to draw SSR/likelihood surface"

kable(table_profileLikelihoodConfidenceInterval(c("CGNM_log","CGNM_log_bootstrap"), alpha = 0.25))
#> [1] "WARNING: ALWAYS first inspect the profile likelihood plot (using plot_profileLikelihood()) and then use this table, DO NOT USE this table by itself."
#> [1] "log saved in /tmp/Rtmpz8mK6u/Rbuildb3d43a99810/CGNM/vignettes/CGNM_log is used to draw SSR/likelihood surface"
#> [1] "log saved in /tmp/Rtmpz8mK6u/Rbuildb3d43a99810/CGNM/vignettes/CGNM_log_bootstrap is used to draw SSR/likelihood surface"
|   | 25 percentile| best-fit| 75 percentile| identifiability  |
|:--|-------------:|--------:|-------------:|:-----------------|
|Ka |      0.487886|       NA|      1.078405| Not identifiable |
|V1 |      9.491705|       NA|     21.452623| Not identifiable |
|CL |      9.354619|       NA|     10.518595| Not identifiable |

Plot the profile likelihood surface

plot_2DprofileLikelihood(CGNM_result, showInitialRange=FALSE, alpha = 0.05)+scale_x_continuous(trans="log10")+scale_y_continuous(trans="log10")
#> [1] "log saved in /tmp/Rtmpz8mK6u/Rbuildb3d43a99810/CGNM/vignettes/CGNM_log is used to draw SSR/likelihood surface"
#> [1] "log saved in /tmp/Rtmpz8mK6u/Rbuildb3d43a99810/CGNM/vignettes/CGNM_log_bootstrap is used to draw SSR/likelihood surface"

Parallel computation

The Cluster Gauss-Newton method implementation in the CGNM package (version 0.6 and above) can use a nonlinear function that takes multiple input vectors stored in a matrix and returns the corresponding outputs as a matrix (in the wrapper below, each row is one input vector and each row of the result is one output vector). This interface exists so that the computation can be parallelized; see below for examples of parallelized implementations on various hardware. The Cluster Gauss-Newton method is embarrassingly parallelizable, so the computation speed is almost proportional to the number of computation cores used, especially for nonlinear functions that take a long time to compute (e.g., models that require numerical methods to solve a large system of ODEs).

model_matrix_function=function(x){
  # apply model_function to each row of x (each row is one parameter vector)
  Y_list=lapply(split(x, rep(seq_len(nrow(x)), ncol(x))), model_function)
  # stack the outputs so that each row is one model output vector
  Y=t(matrix(unlist(Y_list), ncol=length(Y_list)))
  return(Y)
}

testX=t(matrix(c(rep(0.01,3),rep(10,3),rep(100,3)), nrow = 3))
print("testX")
#> [1] "testX"
print(testX)
#>       [,1]  [,2]  [,3]
#> [1,] 1e-02 1e-02 1e-02
#> [2,] 1e+01 1e+01 1e+01
#> [3,] 1e+02 1e+02 1e+02
print("model_matrix_function(testX)")
#> [1] "model_matrix_function(testX)"
print(model_matrix_function(testX))
#>           [,1]      [,2]     [,3]      [,4]      [,5]      [,6]       [,7]
#> [1,] 1.9782455 2.2578754 2.517166 2.6529261 2.7982741 2.9311513  2.9684634
#> [2,] 1.7756978 1.8804296 1.860008 1.7832148 1.6114094 1.1771685  0.7428740
#> [3,] 0.9609136 0.9175059 0.830647 0.7437881 0.5700703 0.1357758 -0.2985186
#>            [,8]      [,9]
#> [1,]  2.9771626  2.952246
#> [2,] -0.5600094 -3.165776
#> [3,] -1.6014021 -4.207169

print("model_matrix_function(testX)-rbind(model_function(testX[1,]),model_function(testX[2,]),model_function(testX[3,]))")
#> [1] "model_matrix_function(testX)-rbind(model_function(testX[1,]),model_function(testX[2,]),model_function(testX[3,]))"
print(model_matrix_function(testX)-rbind(model_function(testX[1,]),model_function(testX[2,]),model_function(testX[3,])))
#>      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9]
#> [1,]    0    0    0    0    0    0    0    0    0
#> [2,]    0    0    0    0    0    0    0    0    0
#> [3,]    0    0    0    0    0    0    0    0    0
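The all-zero difference confirms that model_matrix_function reproduces model_function applied row by row.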

An example of a parallel implementation for Mac using the parallel package


# library(parallel)
#
# obsLength=length(observation)
# 
## Since CGNM searches through a wide range of parameter combinations, it can
## encounter combinations that are not feasible to evaluate. This tryCatch
## wrapper is built into CGNM for regular (vector) functions, but for matrix
## functions the user needs to implement it outside of CGNM.
#
# modelFunction_tryCatch=function(x_in){
#  out=tryCatch({model_function(x_in)},
#               error=function(cond) {rep(NA, obsLength)}
#  )
#  return(out)
# }
# 
#  model_matrix_function=function(X){
#   # evaluate each row of X in parallel, leaving one core free
#   Y_list=mclapply(split(X, rep(seq_len(nrow(X)), ncol(X))), modelFunction_tryCatch,
#                   mc.cores = (parallel::detectCores()-1), mc.preschedule = FALSE)
#
#   Y=t(matrix(unlist(Y_list), ncol=length(Y_list)))
#
#   return(Y)
#  }
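Note that mc.preschedule = FALSE dispatches the jobs one at a time instead of pre-assigning them to cores, which gives better load balancing when evaluation times vary widely across parameter combinations (as they often do when some combinations make the model stiff or infeasible).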

An example of a parallel implementation for Windows using the foreach and doParallel packages

#library(foreach)
#library(doParallel)

#numCore=8
#cluster=makeCluster(numCore-1, type = "PSOCK")
#registerDoParallel(cl=cluster)

# obsLength=length(observation)

## Since CGNM searches through a wide range of parameter combinations, it can
## encounter combinations that are not feasible to evaluate. This tryCatch
## wrapper is built into CGNM for regular (vector) functions, but for matrix
## functions the user needs to implement it outside of CGNM.

# modelFunction_tryCatch=function(x_in){
#  out=tryCatch({model_function(x_in)},
#               error=function(cond) {rep(NA, obsLength)}
#  )
#  return(out)
# }

#model_matrix_function=function(X){
#  # make sure to list all functions the model needs in .export and all used
#  # packages in .packages; for details, read the documentation of %dopar%
#  Y_list=foreach(i=seq_len(nrow(X)), .export = c("model_function", "modelFunction_tryCatch"))%dopar%{
#      modelFunction_tryCatch(X[i,])
#    }
#
#  Y=t(matrix(unlist(Y_list), ncol=length(Y_list)))
#  return(Y)
#}
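On Windows, the PSOCK workers run in separate R sessions, so every function the model depends on must be listed in .export (and every required package in .packages); forgetting one is a common cause of “could not find function” errors inside %dopar%.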
CGNM_result=Cluster_Gauss_Newton_method(nonlinearFunction=model_matrix_function,
  targetVector = observation,
  initial_lowerRange = rep(0.01,3), initial_upperRange = rep(100,3), lowerBound = rep(0,3),
  saveLog = TRUE, num_minimizersToFind = 500, ParameterNames = c("Ka","V1","CL"))
#> Warning in dir.create(saveFolderName): 'CGNM_log' already exists
#> Ka, V1, CL will be transformed internally to impose boundaries. See CGNM_result$runSetting$ReparameterizationDef for exact transformation. Transformed parameters are denoted as x and untransformed parameters are denoted as theta.
#> nonlinearFunction is given as matrix to matrix function
#> NonlinearFunction evaluation at initial_lowerRange Successful.
#> NonlinearFunction evaluation at (initial_upperRange+initial_lowerRange)/2 Successful.
#> NonlinearFunction evaluation at initial_upperRange Successful.
#> CGNM iteration should finish before: 2024-12-10 06:35:45.503676
#> Warning in dir.create(saveFolderName): 'CGNM_log' already exists
#> Generating initial cluster. 497 out of 500 done
#> Generating initial cluster. 500 out of 500 done
#> Iteration:1  Median sum of squares residual=5.92292897175439
#> Rough estimation of remaining computation time: 0.9 min
#> CGNM iteration estimated to finish at: 2024-12-10 06:36:37.704743
#> Iteration:2  Median sum of squares residual=2.71019546980899
#> Iteration:3  Median sum of squares residual=1.27101426207368
#> Iteration:4  Median sum of squares residual=0.926981289187213
#> Iteration:5  Median sum of squares residual=0.864422226995585
#> Iteration:6  Median sum of squares residual=0.275660769850915
#> Iteration:7  Median sum of squares residual=0.0214259736247871
#> Iteration:8  Median sum of squares residual=0.00739381811020241
#> Iteration:9  Median sum of squares residual=0.00734932110670596
#> Iteration:10  Median sum of squares residual=0.00734923429144077
#> Iteration:11  Median sum of squares residual=0.00734923409116069
#> CGNM iteration estimated to finish at: 2024-12-10 06:36:26.173737
#> Iteration:12  Median sum of squares residual=0.00734923408256669
#> Iteration:13  Median sum of squares residual=0.00734923408240188
#> Iteration:14  Median sum of squares residual=0.0073492340823878
#> Iteration:15  Median sum of squares residual=0.00734923408238441
#> Iteration:16  Median sum of squares residual=0.00734923408238364
#> Iteration:17  Median sum of squares residual=0.00734923408238209
#> Iteration:18  Median sum of squares residual=0.00734923408238192
#> Iteration:19  Median sum of squares residual=0.00734923408238134
#> Iteration:20  Median sum of squares residual=0.00734923408238108
#> Iteration:21  Median sum of squares residual=0.00734923408238104
#> CGNM iteration estimated to finish at: 2024-12-10 06:36:26.487945
#> Iteration:22  Median sum of squares residual=0.00734923408238095
#> Iteration:23  Median sum of squares residual=0.00734923408238092
#> Iteration:24  Median sum of squares residual=0.0073492340823809
#> Iteration:25  Median sum of squares residual=0.00734923408238088
#> CGNM computation time:  0.8 min


#stopCluster(cluster) #make sure to close the created cluster if needed
unlink("CGNM_log", recursive=TRUE)
unlink("CGNM_log_bootstrap", recursive=TRUE)

What is CGNM?

For the complete description and comparison with conventional algorithms, please see https://doi.org/10.1007/s11081-020-09571-2:

Aoki, Y., Hayami, K., Toshimoto, K., & Sugiyama, Y. (2020). Cluster Gauss–Newton method. Optimization and Engineering, 1-31.

The mathematical problem CGNM solves

The Cluster Gauss-Newton method is an algorithm for obtaining multiple minimisers of nonlinear least squares problems

$$\min_x \| f(x) - y^* \|_2^2$$

which do not have a unique solution (global minimiser), that is to say, there exist $x^{(1)} \neq x^{(2)}$ such that

$$\min_x \| f(x) - y^* \|_2^2 = \| f(x^{(1)}) - y^* \|_2^2 = \| f(x^{(2)}) - y^* \|_2^2 .$$

Parameter estimation problems of mathematical models can often be formulated as nonlinear least squares problems. Typically these problems are solved numerically using iterative methods. The local minimiser obtained using these iterative methods usually depends on the choice of the initial iterate; thus, the estimated parameter and any subsequent analyses using it depend on that choice. One way to reduce the analysis bias due to the choice of the initial iterate is to repeat the algorithm from multiple initial iterates (i.e., to use a multi-start method). However, this procedure can be computationally intensive and is not always used in practice.

To overcome this problem, we propose the Cluster Gauss-Newton method (CGNM), an efficient algorithm for finding multiple approximate minimisers of nonlinear least squares problems. CGNM simultaneously solves the nonlinear least squares problem from multiple initial iterates. It then iteratively improves the approximations from these initial iterates, similarly to the Gauss-Newton method; however, it uses a global linear approximation instead of the Jacobian. The global linear approximations are computed collectively among all the iterates to minimise the computational cost associated with the evaluation of the mathematical model.
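For the flip-flop example used in this vignette, the non-uniqueness can be verified directly: swapping the roles of the absorption rate constant ka and the elimination rate constant CL/V1 (keeping CL fixed and setting the new V1 to CL divided by the old ka) leaves Cp unchanged. A quick check using the two clusters CGNM found:

# the two best-fit clusters found earlier; ka and CL/V1 are interchanged
theta1=c(0.5178956, 10.66084, 9.877326) # ka = 0.518, CL/V1 = 0.927
theta2=c(0.9265056, 19.07204, 9.877326) # ka = 0.927, CL/V1 = 0.518

# identical up to the rounding of the printed estimates
max(abs(model_function(theta1)-model_function(theta2)))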