PopED computes optimal experimental designs for both population and individual studies based on nonlinear mixed-effect models. Often this is based on a computation of the Fisher Information Matrix (FIM).

To get started you need to define:

• A model
• An initial design (and a design space, if you want to optimize)

Below is an example to introduce the package. This example, and several others, are available as R scripts in the “examples” folder of the PopED installation directory, located at:

system.file("examples", package="PopED")

You can view a list of the example files using the commands:

ex_dir <- system.file("examples", package="PopED")
list.files(ex_dir)

You can then open one of the examples (for example, ex.1.a.PK.1.comp.oral.md.intro.R, the code found in this vignette) using the following code:

file_name <- "ex.1.a.PK.1.comp.oral.md.intro.R"
ex_file <- system.file("examples",file_name,package="PopED")
file.copy(ex_file, tempdir(), overwrite = TRUE)
file.edit(file.path(tempdir(),file_name))

In addition, there is another vignette called “Examples” that explores the new features in each example.

# Define a model

Here we define a one-compartment pharmacokinetic model with linear absorption using an analytical solution. In this case the solution is applicable for both multiple and single dosing. Note that this function is also predefined in PopED as ff.PK.1.comp.oral.md.CL (see ?ff.PK.1.comp.oral.md.CL for more information).

ff <- function(model_switch, xt, parameters, poped.db){
  with(as.list(parameters),{
    N = floor(xt/TAU) + 1
    y = (DOSE*Favail/V)*(KA/(KA - CL/V)) *
      (exp(-CL/V * (xt - (N - 1) * TAU)) * (1 - exp(-N * CL/V * TAU))/(1 - exp(-CL/V * TAU)) -
         exp(-KA * (xt - (N - 1) * TAU)) * (1 - exp(-N * KA * TAU))/(1 - exp(-KA * TAU)))
    return(list(y=y, poped.db=poped.db))
  })
}

Next we define the parameters of this function. In this case, the between-subject variability (BSV) for each parameter is log-normally distributed (the parameter Favail is assumed not to have BSV). DOSE and TAU are defined as covariates (in vector a) so that we can optimize their values later.
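A parameter-definition function matching this description might be sketched as follows (the name sfg and the argument order are assumptions taken from the fg_fun=sfg argument in the create.poped.database call later in this vignette):

```r
sfg <- function(x, a, bpop, b, bocc){
  parameters = c(
    V      = bpop[1]*exp(b[1]),  # log-normal BSV on V
    KA     = bpop[2]*exp(b[2]),  # log-normal BSV on KA
    CL     = bpop[3]*exp(b[3]),  # log-normal BSV on CL
    Favail = bpop[4],            # no BSV on Favail
    DOSE   = a[1],               # covariates, optimized later
    TAU    = a[2])
  return(parameters)
}
```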

Now we define the residual unexplained variability (RUV) function. In this case, the RUV has both an additive and a proportional component.
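One way to write such a function is sketched below (the name feps is an assumption taken from the fError_fun=feps argument in the database call later in this vignette; it reuses the ff function defined above):

```r
feps <- function(model_switch, xt, parameters, epsi, poped.db){
  returnArgs <- ff(model_switch, xt, parameters, poped.db)
  y <- returnArgs[[1]]
  poped.db <- returnArgs[[2]]
  # proportional (epsi[,1]) plus additive (epsi[,2]) residual error
  y = y*(1 + epsi[,1]) + epsi[,2]
  return(list(y=y, poped.db=poped.db))
}
```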

# Define an initial design and design space

Now we define the model parameter values, the initial design and design space for optimization.

In this example, the parameter values are defined for the fixed effects (bpop), the between-subject variability variances (d) and the residual variability variances (sigma). We also fix the parameter Favail using notfixed_bpop, since we have only oral dosing and the parameter is not identifiable. Fixing a parameter means that we assume the parameter will not be estimated (and is known without uncertainty). In addition, we fix the small additive RUV term, as this term reflects the higher error expected at low concentration measurements (near the limit of quantification) and would typically be calculated from the analytical assay methods (for example, the standard deviation of the term might be 20% of the limit of quantification).

For the initial design, we define two groups (m=2) of 20 individuals (groupsize=20), with doses of 20 mg or 40 mg every 24 hours (a). The initial design has 5 sample times per individual (xt).

For the design space, which can be searched during optimization, we define a potential dose range of between 0 and 200 mg (mina and maxa), and a range of potential sample times between 0 and 10 hours for the first three samples and between 240 and 248 hours for the last two samples (minxt and maxxt). Finally, we fix the two groups of subjects to have the same sample times (bUseGrouped_xt=TRUE).

poped.db <- create.poped.database(
  # Model
  ff_fun=ff,
  fg_fun=sfg,
  fError_fun=feps,
  bpop=c(V=72.8, KA=0.25, CL=3.75, Favail=0.9),
  notfixed_bpop=c(1,1,1,0),
  d=c(V=0.09, KA=0.09, CL=0.25^2),
  sigma=c(prop=0.04, add=5e-6),  # proportional and (fixed) additive RUV variances
  notfixed_sigma=c(1,0),

  # Design
  m=2,
  groupsize=20,
  a=list(c(DOSE=20, TAU=24), c(DOSE=40, TAU=24)),
  maxa=c(DOSE=200, TAU=24),
  mina=c(DOSE=0, TAU=24),
  xt=c(1, 2, 8, 240, 245),

  # Design space
  minxt=c(0, 0, 0, 240, 240),
  maxxt=c(10, 10, 10, 248, 248),
  bUseGrouped_xt=TRUE)

# Simulation

First it may make sense to check your model and design to make sure you get what you expect when simulating data. Here we plot the model typical values:

plot_model_prediction(poped.db, model_num_points = 300)

Next, we plot the expected prediction interval (by default a 95% PI) of the data, taking into account the BSV and RUV, using the option PI=TRUE. This option makes predictions based on first-order approximations to the model variance and a normality assumption for that variance. Better (and slower) computations are possible with the DV=TRUE, IPRED=TRUE and groupsize_sim = some large number options.

plot_model_prediction(poped.db, PI=TRUE, separate.groups=TRUE, model_num_points = 300, sample.times = FALSE)

We can get these predictions numerically as well:
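For example, using model_prediction, the PopED function behind these plots (the arguments shown are one possible choice):

```r
library(PopED)

# Typical-value predictions (PRED) at the design's sample times
model_prediction(poped.db)

# Simulated data (DV), including BSV and RUV
model_prediction(poped.db, DV = TRUE)
```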

# Comparison of designs

The precision of the CL estimate is similar with the alternative design, but the other parameters are less well estimated.
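The RSE values compared below can be computed with evaluate_design. Here poped.db.new is a hypothetical second database (not defined in this vignette) describing the alternative design with fewer samples per subject:

```r
library(PopED)

ds1 <- evaluate_design(poped.db)      # original design
ds2 <- evaluate_design(poped.db.new)  # alternative design (hypothetical database)
```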

(design_eval <- round(data.frame(design_1=ds1$rse,design_2=ds2$rse),0))
design_1 design_2
V 8 44
KA 10 51
CL 4 6
om_V 40 208
om_KA 61 281
om_CL 28 53
sig_prop 14 23

Comparing the objective function values (OFV), we see that the alternative design (fewer samples per subject) has a smaller OFV (= worse). We can compare the two OFVs using efficiency, which tells us the proportion of extra individuals needed in the alternative design to achieve the same information content as the original design (here, around 4 times more individuals than are currently in the design).
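Since the OFV here is ln|FIM|, this efficiency can be sketched directly from the two design evaluations (assuming ds1 and ds2 hold evaluate_design results for the original and alternative designs, and npar is the number of unfixed parameters, 7 in this example):

```r
npar <- 7                             # unfixed parameters in this example
eff <- exp((ds2$ofv - ds1$ofv)/npar)  # (|FIM_2|/|FIM_1|)^(1/npar)
1/eff  # approximate fold-increase in individuals needed by design 2
```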

# Design optimization

Now we can optimize the sample times of the original design by maximizing the OFV.¹

output <- poped_optim(poped.db, opt_xt=TRUE)

We see that there are four distinct sample times for this design. This means that, for this model with these exact parameter values, these sample times extract the most information from the study to inform the parameter estimation.

# Examine efficiency of sampling windows

Of course, this means that there are multiple samples at some of these time points. We can explore a more practical design by looking at the loss of efficiency if we spread out sample times in a uniform distribution around these optimal points ($\pm 30$ minutes).

plot_efficiency_of_windows(output$poped.db, xt_windows=0.5)

Here we see the efficiency ($(|FIM_{optimized}|/|FIM_{initial}|)^{1/npar}$) drops below 80% in some cases, which is mostly caused by an increase in the parameter uncertainty of the BSV parameter on absorption (om_KA). Smaller windows, or different windowing on different samples, might be needed. To investigate, see ?plot_efficiency_of_windows.

# Optimize over a discrete design space

In the previous example we optimized over a continuous design space (sample times could be optimized to be any value between a lower and an upper limit). We could also limit the search to only “allowed” values, for example, only samples taken on the hour:

poped.db.discrete <- create.poped.database(poped.db, discrete_xt = list(c(0:10,240:248)))

output_discrete <- poped_optim(poped.db.discrete, opt_xt=TRUE)

Here we see that the optimization ran somewhat quicker, but gave a less efficient design.

# Optimize ‘Other’ design variables

One could also optimize over dose, to see if a different dose could help in parameter estimation.

output_dose_opt <- poped_optim(output$poped.db, opt_xt=TRUE, opt_a=TRUE)

In this case the results are predictable: higher doses give observations with somewhat lower absolute residual variability, leading to both groups being pushed to the highest allowed dose level (200 mg in this case).

# Cost function to optimize dose

Optimizing the dose of a study just to have better model parameter estimates may be somewhat implausible. Instead, let’s use a cost function to optimize dose based on some sort of target concentration … perhaps typical population trough concentrations of 0.2 and 0.35 for the two groups of patients at 240 hours.

First we define the criteria we use to optimize the doses, here a least squares minimization.
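Such a criterion might be sketched as follows (crit_fcn is a hypothetical name; the target trough concentrations of 0.2 and 0.35 at 240 hours come from the text above):

```r
library(PopED)

crit_fcn <- function(poped.db, ...){
  pred_df <- model_prediction(poped.db)
  # Sum of squared deviations of the typical trough predictions
  # (one per group at time 240) from the targets
  sum((pred_df[pred_df$Time == 240, "PRED"] - c(0.2, 0.35))^2)
}
```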

Now we minimize the cost function
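Assuming the least-squares criterion described above has been defined as a function named crit_fcn (a hypothetical name), the minimization might look like:

```r
library(PopED)

# Optimize dose (a), not sample times; pass the cost function as the
# objective and minimize rather than maximize
output_cost <- poped_optim(poped.db, opt_a = TRUE, opt_xt = FALSE,
                           ofv_fun = crit_fcn,
                           maximize = FALSE)
```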

1. Tip: to make the optimization run faster use the option parallel = TRUE in the poped_optim command.