- On 1 Feb 2005 at 17:26:39, "Vijay V. Upreti" (vupre001.-at-.umaryland.edu) sent the message


The following message was posted to: PharmPK

Dear all,

I have a data modeling problem. I have single-dose IV PK data from 4 large mammals. The plasma concentration-time profiles of three animals show a good fit to a three-exponential model, while one of them distinctly fits a bi-exponential model, as judged by the precision of the parameter estimates and goodness-of-fit criteria. Modeling was performed using WinNonlin and a suitable weighting scheme was applied. In such a situation, which model should be selected to report the final PK parameters? Some thoughts from the group, please.

Thanks

Vijay V Upreti
Pharmacokinetics-Biopharmaceutics Laboratory
Department of Pharmaceutical Sciences
University of Maryland, School of Pharmacy
20 Penn St., Baltimore, MD 21201
Voice: 410-706-7388
Fax: 410-706-5017

- On 1 Feb 2005 at 23:10:48, Dimiter Terziivanov (terziiv.aaa.yahoo.com) sent the message


Dr. Vijay V Upreti,

As a rule of thumb, the law of parsimony recommends in such situations using the simplest of the models, i.e. the model with the fewer number of structural PK parameters. Increasing the number of PK parameters apparently improves the goodness of fit; it is better to look at the value of the log-likelihood criterion.

Kind Regards,

D. Terziivanov

Dimiter Terziivanov, MD, PhD, DSc, Professor
Head, Clinic of Clinical Pharmacology and Pharmacokinetics,
Univ Hosp "St. I. Rilsky",
15 Acad. I. Geshov st, 1431 Sofia, Bulgaria
Tel: (+359 2) 8510639; (+359 2) 5812 828.
Fax: (+359 2) 8519309. e-mail: terziiv.at.yahoo.com

- On 2 Feb 2005 at 08:34:06, "Hans Proost" (j.h.proost.at.rug.nl) sent the message


Dear Vijay,

Fitting a three-exponential (or three-compartment) model to a number of individuals almost always results in the problem you describe, i.e. that a three-exponential model cannot be found in some individuals. Even with a sufficient number of data points this is quite common in my experience, with both real data and Monte Carlo simulation data.

IMHO, the only satisfactory solution is a population approach, e.g. mixed effects modeling (e.g. NONMEM) or a Bayesian method such as Iterative Two-Stage Bayesian analysis. In my experience the latter method works very well and, as a result of the Bayesian principle, avoids the problem of unidentifiable parameters in some individuals. In addition, individual parameter estimates are more precise, and population standard deviations are unbiased, in contrast to the conventional Standard Two-Stage approach.

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.aaa.rug.nl

[Interesting approach and use of population PK. I've had similar problems with 1 versus 2 compartment models. From memory I chose the smaller model OR reported both models in a combined table. The population approach looks like a nice option which may even support a 3 compartment model in a data rich environment - db]

- On 2 Feb 2005 at 09:17:32, "Porzio, Stefano" (Stefano.Porzio.-at-.ZambonGroup.com) sent the message


Dear Vijay,

To discriminate between different exponential functions you can typically examine the patterns of the residuals. Alternatively, you can use as a selection method the Akaike Information Criterion (AIC), the Model Selection Criterion (MSC), the F-test for the best-fitting model, or the Schwarz criterion.
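For illustration, here is a minimal Python sketch (numpy/scipy, with entirely made-up data and one common least-squares form of the AIC) of how the AIC ranks a mono- versus bi-exponential fit; the data values, initial guesses, and noise level are assumptions for the example only:

```python
import numpy as np
from scipy.optimize import curve_fit

# Simulated bi-exponential IV data (hypothetical values, small additive noise)
rng = np.random.default_rng(7)
t = np.linspace(0.25, 24, 30)
y = 10.0 * np.exp(-1.0 * t) + 5.0 * np.exp(-0.1 * t) + rng.normal(0, 0.1, t.size)

def mono(t, A, a):
    return A * np.exp(-a * t)

def bi(t, A, a, B, b):
    return A * np.exp(-a * t) + B * np.exp(-b * t)

def aic(y, yhat, p):
    # One common least-squares form: n*ln(SSE/n) + 2p; smaller is better
    n = y.size
    sse = ((y - yhat) ** 2).sum()
    return n * np.log(sse / n) + 2 * p

p1, _ = curve_fit(mono, t, y, p0=[10.0, 0.5], maxfev=10000)
p2, _ = curve_fit(bi, t, y, p0=[8.0, 1.5, 4.0, 0.2], maxfev=10000)

aic_mono = aic(y, mono(t, *p1), 2)
aic_bi = aic(y, bi(t, *p2), 4)
# The extra exponential is only kept if it lowers the AIC despite the 2p penalty.
```

Here the bi-exponential AIC comes out lower, so the richer model is retained; with noisier or truncated data the 2p penalty can reverse the ranking.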

For reference:

Ludden TM et al. Comparison of the Akaike Information Criterion, the Schwarz criterion and the F test as guides to model selection. J Pharmacokinet Biopharm 22(2): 431-445, 1994.

John G. Wagner. Measures of fit. In: Pharmacokinetics for the Pharmaceutical Scientist. Technomic Publishing Co., 1993.

Best Regards

Stefano Porzio
Pharmacokinetic and Tox. Dept.
Inpharzam Ricerche SA - ZAMBON-GROUP
Taverne - Switzerland

- On 2 Feb 2005 at 07:30:28, Robert P Hunter (HUNTER_ROBERT_P.at.LILLY.COM) sent the message


Dear Vijay,

Use the model which is the best fit for each animal to report the individual animal's pharmacokinetic data. Then use whatever summary statistic you wish, such as median and range, to report the results, in my opinion.

Rob Hunter, MS, PhD
Sr. Research Scientist
Veterinary Safety/ADME
Elanco Animal Health
HUNTER_ROBERT_P.-a-.LILLY.COM

- On 2 Feb 2005 at 10:30:42, "Bonate, Peter" (Peter.Bonate.aaa.genzyme.com) sent the message


In regards to the model selection discussion that is occurring right now, I don't think the issue is which model is better. I think Vijay has stated pretty clearly that in some subjects a 2-compartment model fits better than a 3-compartment model. The issue is what to do about it, if anything.

This phenomenon is probably due to analytical constraints. Suppose all subjects follow a 3-compartment model, but the rates are faster in some subjects than in others. If the LLOQ of the assay is such that the third phase of the profile cannot be observed in the subjects with the faster rates, then those subjects will appear to have bi-exponential kinetics and a 2-compartment model may fit better - even though the 3-compartment model is the true model.

I think Hans is correct. The only true solution is a population approach where the subjects with data missing in the third phase "borrow" data from the population to estimate their individual pharmacokinetic parameters.

At the individual level, some of the results from the bi-exponential fit are biased estimates. All estimates related to the central compartment are unbiased, but the estimates related to the peripheral compartments will be biased. To see this I did a quick simulation with a 3-compartment model having the following parameters: V1 = 1, V2 = 0.5, V3 = 0.25, CL = 1.5, Q2 = 0.15, Q3 = 0.015, with a dose of 100 units. I collected 200 samples up to 48 time units. A nice tri-exponential curve is evident. I then fit a 2-compartment model to the data. The results were V1 = 1.0 +/- 2.32E-6, V2 = 0.56 +/- 4.1E-5, CL = 1.5 +/- 4.7E-6, Q2 = 0.16 +/- 4.7E-6. The model does an excellent job up to the start of the third phase, where the model breaks down. When I fit only data from the first two phases, the model is a perfect fit, having the same parameter estimates as when fitting all data up to 48 time units.

It's easy to redo this simulation with different combinations of parameters. Every time, the estimates related to the central compartment (CL, V1) will be precisely estimated. The same conclusion will hold if you fit a 1-compartment model to 2-compartment data. Why this is, I don't know. I am sure someone with time on their hands can show it mathematically - if it hasn't been done already.
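Pete's experiment can be sketched in Python (numpy/scipy). The rate constants below follow from his stated volumes, clearances, and dose, but the fitting approach (an unweighted bi-exponential least-squares fit, with V1 and CL recovered from the back-extrapolated intercepts and the AUC of the fitted curve) is a reconstruction under those assumptions, not his actual software setup:

```python
import numpy as np
from scipy.optimize import curve_fit

# Bonate's stated parameters: V1=1, V2=0.5, V3=0.25, CL=1.5, Q2=0.15, Q3=0.015, dose=100
V1, V2, V3, CL, Q2, Q3, dose = 1.0, 0.5, 0.25, 1.5, 0.15, 0.015, 100.0
k10, k12, k21, k13, k31 = CL/V1, Q2/V1, Q2/V2, Q3/V1, Q3/V3

# Linear 3-compartment system for amounts, solved by eigendecomposition
K = np.array([[-(k10 + k12 + k13),  k21,   k31],
              [ k12,               -k21,   0.0],
              [ k13,                0.0,  -k31]])
evals, evecs = np.linalg.eig(K)
coef = np.linalg.solve(evecs, np.array([dose, 0.0, 0.0]))

t = np.linspace(0.01, 48.0, 200)      # 200 samples up to 48 time units
amt1 = (evecs[0] * coef * np.exp(np.outer(t, evals))).sum(axis=1)
conc = amt1.real / V1                 # noiseless tri-exponential curve

# Fit a 2-compartment (bi-exponential) model to the 3-compartment data
def biexp(t, A, a, B, b):
    return A * np.exp(-a * t) + B * np.exp(-b * t)

(A, a, B, b), _ = curve_fit(biexp, t, conc, p0=[90.0, 1.8, 5.0, 0.25], maxfev=20000)

V1_hat = dose / (A + B)            # from the back-extrapolated C(0)
CL_hat = dose / (A / a + B / b)    # dose / AUC of the fitted bi-exponential
# Central-compartment parameters come back essentially unbiased (V1 ~ 1, CL ~ 1.5);
# the misfit is absorbed by the peripheral-compartment terms.
```

Running this reproduces the qualitative result: V1_hat and CL_hat land on the true values to within a few percent even though the structural model is wrong.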

Pete Bonate

[Pete, great job with the two volumes for AAPS - db]

- On 2 Feb 2005 at 16:48:24, "Vijay V. Upreti" (vupre001.at.umaryland.edu) sent the message


Dear group, thanks to all for the highly useful input. Thanks, Dr. Bonate, for the simulation exercise; it indeed makes a very good point. As Drs. Proost and Bonate suggested, I do understand that the real solution is a population analysis, but I was skeptical given the small sample size. Anyway, I will give it a shot.

Thanks again

Vijay Upreti

- On 2 Feb 2005 at 16:59:16, Roger Jelliffe (jelliffe.at.usc.edu) sent the message


Dear Vijay:

If you are considering evaluating various methods of modeling, you might consider the nonparametric population modeling approach. This approach makes no assumptions about the shape of the parameter distributions in the population, such as Gaussian, lognormal, etc. The distributions are determined only by the data itself and by the weighting scheme used. If one examines the assay error pattern, one can then also find the magnitude of the environmental noise, which we call gamma, and can get the entire parameter distribution which has the maximum likelihood. This is exact in the nonparametric approach, not approximate as in the iterative Bayesian or NONMEM approaches, both of which usually use the FOCE approach to approximate the likelihood. These approximations destroy statistical consistency and seriously interfere with the precision of the parameter estimates. That is the problem with current parametric approaches: they do NOT have the basic property that the more subjects you study, the closer the results get to the truth. This is because of the FO and FOCE approximations.

Bob Leary has examined this well, and has presented it at the PAGE meeting in various forms in the last 2 or 3 years. He has also implemented a parametric EM (PEM) method which uses a Faure low discrepancy integration procedure to get an almost exact likelihood.

Dr. Leary simulated a two-parameter truly Gaussian population ranging from 25 to 800 subjects, and compared the results found with 1) IT2B (using the FOCE approximate parametric likelihood), 2) PEM (with an accurate parametric likelihood from the Faure low discrepancy sequence numerical integration procedure [39]), and 3) NPAG and NPOD (with exact nonparametric likelihoods).

The model had parameters Vol and Kel. Over 1000 replications were done. Populations ranged from 25 to 800 subjects. A single intravenous bolus dose and 2 simulated serum concentrations, each with a 10% standard deviation, were used. Results with both NPAG and PEM were consistent, with estimates more closely approaching the true values as the number of subjects increased. The FOCE IT2B did not have such consistent behavior. A small bias in mean values of 1-2% was seen with FOCE. As to variances, NPAG and PEM were again consistent, but the bias of FOCE was quite significant, about 20-30%. As to correlation coefficients, consistent behavior was again seen with NPAG, NPOD, and PEM. Severe bias was seen with FOCE.

Even more disturbing was the loss of statistical efficiency with the FOCE approximation. Recently [41], this work was extended, with Dr. Ruedi Port of the German Cancer Research Institute, to include the FO and FOCE approximations as implemented in the parametric population modeling program NONMEM. NONMEM FO resulted in biases as high as 50% in estimates of variances, and statistical efficiencies less than 2% of those of the accurate likelihood PEM and NPAG methods for 800 subjects. NONMEM FOCE was a modest improvement relative to its IT2B FOCE counterpart. However, NONMEM FOCE still exhibited significantly compromised statistical efficiency, less than half that of the accurate likelihood methods, as shown below:

Estimator            Relative efficiency    Relative error
DIRECT OBSERVATION   100.0%                 1.00
PEM                  75.4%                  1.33
NPOD                 61.4%                  1.63
NONMEM FOCE          29.0%                  3.45
IT2B FOCE            25.3%                  3.95
NONMEM FO            0.9%                   111.11

A Recent Competition. In September 2004, an international blind trial of seven parametric population PK/PD estimation methods was conducted under the sponsorship of INSERM in Lyon, France. One hundred simulated data sets from a sigmoidal PD dose/response model were sent out in May 2004 to a variety of PK/PD software vendors and academic developers. Both standard nonlinear mixed effects methods (e.g., NONMEM and NLME) based on FOCE likelihood approximations and new approaches (simulated likelihood, stochastic approximation, and parametric EM methods, including our PEM) were included. In September 2004, participants met in Lyon and the results were revealed. In general, the methods based on more precise likelihood evaluation techniques significantly outperformed the methods using FOCE approximations. Our PEM method tied for the overall best performance among all seven methods, as measured by criteria such as the RMSE of the estimated parameter values relative to the true values, and the bias of the model predictions. In particular, PEM had the best overall performance in correctly identifying which data sets had a significant gender covariate dependence and which did not.

When it is not known whether the distributions are Gaussian or not, NPOD and NPAG are very efficient and useful methods for population PK/PD modeling, as shown above.

It really seems that methods that use approximations to get the likelihood are on the way out. Exact likelihoods are in. You would be well served to get away from approximate likelihood methods and to use either a good parametric method such as PEM, or a nonparametric method such as NPAG or NPOD, to make your population model. One other big advantage of the nonparametric methods is that they are very well suited to the new method of "multiple model" (MM) dosage design (see our web site). This method uses nonparametric models and develops the dosage regimen which is designed to hit the selected target specifically with the least weighted squared error.

Go to our web site www.lapk.org, click on New Advances in PK/PD Modeling, and see Bob Leary's seminar which he gave to the Applied Math Department at USC not too long ago. You can also go to Teaching Topics and click around there for more information. Also, go to the PAGE meeting for last year, 2004, in Uppsala, and find Dr. Leary's abstract from his talk there.

If you are going to make pop models, you would be well served to do it in a way which permits the most optimal course of action based on the data from the population. Methods based on parametric models assume a symmetrical parameter distribution such as normal or lognormal. The action taken (the dosage regimen) is based only on the estimated central tendencies of the parameter distribution. In contrast, NP methods permit maximally precise MM dosage regimens based on the entire joint parameter distribution, and are specifically designed to hit the selected targets with minimal weighted squared error, based on that raw population data.

Think about it all. Click around. See for yourself. Clinical software for multiple model dosage design is available in beta form from us now, and more will be out soon.

Very best regards,

Roger Jelliffe

Roger W. Jelliffe, M.D., Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
Phone (323)442-1300, fax (323)442-1302, email= jelliffe.-at-.usc.edu
Our web site= http://www.lapk.org

- On 4 Feb 2005 at 13:42:00, "Hans Proost" (j.h.proost.-a-.rug.nl) sent the message


Dear Vijay,

You wrote:

> I do understand that the real solution is in population
> analysis, but I was sceptical seeing the small sample size

This is an interesting point of discussion. One might consider population analysis as applicable to large populations, and prefer individual analysis for small groups. IMHO this view is not correct. Even for two individuals I would prefer a population approach over an individual approach, simply because one gets more reliable parameter estimates. This can be demonstrated by analysis of Monte Carlo simulation data.

I would appreciate comments from the PharmPK group on this statement!

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.at.rug.nl

[Yes, that comment caught my eye too. The number of subjects may be small (n > 1 to be a population ;-) but the number of data should be more than 'small' if a three compartment model is being considered. Population analysis will expect a larger number of total data points. 12 points from each of 2 subjects may work as well as 2 points from each of 12 subjects, subject to optimal sampling time questions. I once analyzed data from a six well equilibrium dialysis experiment with some success using NONMEM. The total number of data points should not be too 'small' - db]

- On 4 Feb 2005 at 13:50:29, "Hans Proost" (j.h.proost.aaa.rug.nl) sent the message


Dear Rob,

You wrote to Vijay Upreti:

> Use the model which is the best fit for each animal to report the
> individual animal's pharmacokinetic data. Then use whatever summary
> statistic you wish, such as median and range, to report the results.

I do not understand your answer. Vijay reported the problem that the data of 1 out of 4 animals could not be fitted to a 3-compartment model. Summary statistics of a mix of data from two different models are meaningless. According to your proposal, you would report the median and range of the 3-compartment model for the 3 animals, and the data of the remaining animal for the 2-compartment model? I do not see much difference from reporting the data of all animals individually! The only difference is that you leave out the connection between the values and the individual (and so it is less informative).

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.aaa.rug.nl

- On 4 Feb 2005 at 10:02:33, "Bonate, Peter" (Peter.Bonate.aaa.genzyme.com) sent the message


There is nothing wrong with population modeling of data from a few individuals. I think with data-rich, intensive sampling, as few as 10 individuals could be used to get an accurate measure of the population mean. The difficulty would be in estimating the population variance components, for which you would need much more data. Think about collecting data on a single variable, like weight. It doesn't take much data to get the average weight in a group, but you need much more to get an estimate of the population variance.
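Pete's weight analogy is easy to check with a quick simulation (Python/numpy; the weight distribution, n, and replicate count are invented for illustration): with n = 10 per sample, the sample mean is far more stable in relative terms than the sample variance.

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n = 5000, 10
# 5000 hypothetical studies, each measuring weight (kg) in n = 10 subjects
samples = rng.normal(70.0, 15.0, size=(reps, n))

cv_mean = samples.mean(axis=1).std() / 70.0           # relative spread of mean estimates
cv_var = samples.var(axis=1, ddof=1).std() / 15.0**2  # relative spread of variance estimates
# Theory: ~7% for the mean (sigma/(mu*sqrt(n))) vs ~47% for the variance (sqrt(2/(n-1)))
```

The variance estimate is roughly seven times noisier, in relative terms, than the mean estimate at the same n.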

It sounds like in Vijay's case he would be interested in the individual estimates of the PK parameters, i.e., the empirical Bayes estimates. Again, it will depend on whether the data are sparse or rich. With rich data, accurate estimates could easily be obtained with a reasonably small number of subjects. With sparse data, you would need more subjects and you would suffer from more "shrinkage" than in the data-rich case.

Pete Bonate

- On 4 Feb 2005 at 11:11:59, Robert Bies (rrb47+.at.pitt.edu) sent the message


Dear Vijay and Hans,

Perhaps combining the information and analyzing it using a mixed effects approach, along the lines of Rik Schoemaker's "fitting impossible curves" paper that appeared, I believe, in the British Journal of Clinical Pharmacology around 7 years ago, would help. In this case, information is borrowed across the animals for the model structure, and one can examine whether or not limit of quantitation issues in the one animal not "fittable" to the three compartment model are contributing to the falling out of the individual assessments (or regressions). In addition, one could turn compartments on or off to see if that animal is in fact distinguishable from the other three in this case (or even use a $MIX option - although this n is very, very small for this type of operation).

Best Regards,

Rob Bies
University of Pittsburgh

- On 4 Feb 2005 at 11:23:47, "Labadie, Robert" (robert.labadie.aaa.pfizer.com) sent the message


My thoughts were as follows:

1) Report the best model and summary stats for the 3 animals.

2) Report the best model and findings for the 1 animal.

3) I assumed that all 4 individual animals' findings would simply be listed in a different table.

Regards.

Rob

- On 6 Feb 2005 at 09:10:58, "Steve Duffull" (sduffull.aaa.pharmacy.uq.edu.au) sent the message


Hi all

Pete Bonate wrote:

"There is nothing wrong with population modeling of data from a few individuals. I think with data-rich, intensive sampling, as few as 10 individuals could be used to get an accurate measure of the population mean. The difficulty would be in estimating the population variance components, for which you would need much more data."

It is possible to estimate the number of patients required to estimate the population parameters of various PK and PKPD models. The number of patients will relate to the complexity of the model, the values of the parameters, and of course the sampling design that you intend (not to mention how certain you are that you know the best model). If you were to perform an intensive sampling design (n=12 per subject) with 10 subjects, then it is possible to get reasonable estimates of both the fixed effects and the variances of the random effects parameters.

Suppose Dose = 100, Ka = 1/h, CL = 4 L/h, V = 20 L, and the between subject variances were 0.25, 0.1, 0.1 (respectively, assuming a lognormal distribution of the parameters), with a proportional and additive residual error (but only the proportional component estimated). Then the estimated SEs for the fixed effects parameters were < 20%, and for the variances of the random effects parameters ~50-60%. Adding a few more patients or optimizing the design would improve these estimates further.
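Steve's example design can be simulated directly. The Python/numpy sketch below only generates one such data set; the 12 sampling times and the 10% proportional error SD are my assumptions, and computing the quoted SEs would additionally require a population fit or a Fisher information evaluation (as in PFIM):

```python
import numpy as np

rng = np.random.default_rng(1)
dose, ka_pop, cl_pop, v_pop = 100.0, 1.0, 4.0, 20.0
omega2 = np.array([0.25, 0.10, 0.10])   # BSV variances for Ka, CL, V (log scale)
prop_sd = 0.10                          # assumed proportional residual SD
times = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 16.0, 24.0])

def conc(t, ka, cl, v):
    # 1-compartment model, first-order absorption and elimination
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

data = []
for _ in range(10):                     # 10 subjects, n = 12 samples each
    ka, cl, v = np.array([ka_pop, cl_pop, v_pop]) * np.exp(
        rng.normal(0.0, np.sqrt(omega2)))   # lognormal between-subject variability
    data.append(conc(times, ka, cl, v) * (1 + rng.normal(0.0, prop_sd, times.size)))
data = np.asarray(data)                 # 10 x 12 concentration matrix
```

Fitting such a data set with a mixed effects tool, or evaluating the design's information matrix, is what yields SEs of the kind Steve quotes.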

The more complex 2 compartment model discussed in this thread is much harder to estimate using a population method from a single dose with only 10 patients.

So, the success of a population analysis is of course design dependent - and it is possible (albeit not desirable) to successfully perform such analyses with only a few subjects.

Regards

Steve

Stephen Duffull
School of Pharmacy
University of Queensland
Brisbane 4072
Australia
Tel +61 7 3365 8808
Fax +61 7 3365 1688
University Provider Number: 00025B
Email: sduffull.-at-.pharmacy.uq.edu.au
www: http://www.uq.edu.au/pharmacy/sduffull/duffull.htm
PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm
MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm

- On 6 Feb 2005 at 15:41:52, Angusmdmclean.at.aol.com sent the message


Following the message from Steve and Pete:

Given that sufficient data-rich drug plasma concentration information is available upfront from subjects, and that it is possible to make a reasonable estimate of the mean PK parameters (say for a 1-compartment model with first order absorption and first order elimination) and the associated variance: from the point of view of validation, what is the best way to look forward and position that information with respect to designing a new pop PK clinical study with a large number of subjects involving sparse sampling (say 3 samples per subject)?

(a) Use the pop PK parameters obtained from the data-rich subjects as initial estimates and perform validation with a "model building" set of subjects and a "validation group" of subjects, using exclusively sparse data from each subject; and proceed from there.

or

(b) As part of the study design, should one additionally have some data-rich plasma concentrations from subjects in the validation group to demonstrate the validity of the pop PK model within that study; and proceed from there.

Or indeed other options.

I invite comments on the above.

Angus McLean
8125 Langport Terrace,
Suite 100,
Gaithersburg,
MD 20877

- On 11 Feb 2005 at 10:55:19, "Hans Proost" (j.h.proost.aaa.rug.nl) sent the message


Dear Pete and Steve,

Thank you for your comments about model selection and population modeling. I think we agree that the reliability of the results of a data analysis depends on the study design, including the number of subjects and the number of measurements per subject (and their product, i.e. the total number of measurements, being the most important!), and on the complexity of the model. In my opinion and experience, population analysis will increase the reliability of the results, independent of these factors.

What is desirable is a different question. Given a data set (i.e. if the study has been performed), one should choose the analysis that provides the most reliable estimates. If the study has not yet been performed, the study design should be chosen in such a way that the desired degree of precision and accuracy is likely to be achieved, e.g. by Monte Carlo simulations.

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.at.rug.nl

- On 12 Feb 2005 at 08:37:08, "Steve Duffull" (sduffull.-at-.pharmacy.uq.edu.au) sent the message


Hi Hans

I agree with your thoughts. I have a couple of comments though:

1) The reliability of a study (perhaps better considered as the informativeness of the design) is dependent on the total number of subjects and the number and choice of sampling times. However, the allocation of sampling times to patients is not trivial. Although there is a proportional increase in the informativeness of some study designs with increases in patient numbers, this is not true for the number of samples per patient. Generally, when the number of samples per patient exceeds the number of fixed effects parameters by 1.5 to 2 fold, there are diminishing returns from taking more samples per patient.

2) Monte Carlo simulation is certainly a useful method for finding sufficient designs. The design would be termed sufficient since, when doing a simulation for assessing designs, there is no automated method for searching over the design space - hence the operator stops when the design looks sufficient for the purposes of their study. It is also possible to design population PK and PKPD studies using optimal design techniques, which do allow searching over the potential design space and can provide designs which are near optimal and allow for error in the executed sampling times. Although only a couple of studies have used optimal population designs prospectively to date (both got good results), there are more in the wings which are looking very promising. Without wanting to be seen to wave the optimal design banner even more, there is a staggering difference in the time taken to assess the informativeness of even a single design using Monte Carlo simulation (say at least 10 minutes) versus optimal design (say 0.2 secs).

Kind regards

Steve

Stephen Duffull
School of Pharmacy
University of Queensland
Brisbane 4072
Australia
Tel +61 7 3365 8808
Fax +61 7 3365 1688
University Provider Number: 00025B
Email: sduffull.aaa.pharmacy.uq.edu.au
www: http://www.uq.edu.au/pharmacy/sduffull/duffull.htm
PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm
MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm

- On 14 Feb 2005 at 09:00:01, "Hans Proost" (j.h.proost.aaa.rug.nl) sent the message


Dear Steve,

Thank you for your comments. I fully agree that the allocation of sampling times to patients is not trivial, and that there are diminishing returns from taking more samples per patient when the number of samples per patient exceeds the number of fixed effects parameters by 1.5 to 2 fold. My message was less informative than your reply.

I also agree with your comments on Monte Carlo (MC) simulation and optimal design (OD) techniques. The latter are indeed much faster to assess, and are certainly valuable techniques. But I see some advantages of MC over OD:

1) OD may be less appropriate in the case of more complex problems, e.g. multi-step calculations or cases where there is not a direct mathematical relationship between the data and the outcome to be optimized. MC can be applied to 'any' type of problem.

2) OD is derived for (infinitely) small errors in the data. A more realistic level of data noise results in nonlinearities and systematic deviations. MC handles this correctly, irrespective of the level of data noise.

3) OD is elegant, but from a practical point of view it lacks the 'proof of the pudding'. One has to rely on the statistical background. MC provides directly what one wants to know.

I realize that these advantages are 'relative', perhaps 'a matter of skills', and probably more a 'personal feeling' than convincing evidence. I would suggest that, in general, OD is the first choice for 'normal cases', and MC is the method of choice for validation of OD and for 'special cases'.

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.aaa.rug.nl

- On 15 Feb 2005 at 09:22:55, "Steve Duffull" (sduffull.-at-.pharmacy.uq.edu.au) sent the message


Hi Hans

Thanks for your response. I have a general comment about the difference between MC simulation and optimal design (OD), and then I will address your comments. To me the most important difference between the methods is the question being asked. In MC simulation you can ask the question: "What is the power of this study to reject the null?" I know of no way that this question can be asked in an optimal design sense. However, OD can answer the questions: "What is the best design to estimate the parameters of my model?" or "What is the best design to discriminate between two or more models?" or "What is the best design to reduce parameter bias?"

To answer your questions below:

1) "OD may be less appropriate for more complex models" - without an example I cannot agree or disagree. Both MC simulation and OD are dependent on models - without a model you cannot simulate data. All models need to be able to be expressed in some form (analytical or as differential equations). So I cannot think of an example where MC simulation would be able to work without a model - or indeed be able to work with models that OD could not.

2) "OD is derived for (infinitely) small errors in the data" - not true. OD accounts for exactly the same residual variability models that MC simulation does. So take any model from NONMEM: you can simulate from it and you can optimize a design for it.

3) "OD lacks proof of the pudding" - true - but you have to start somewhere (to be honest, MC simulation is also lacking here). I understand your argument - but it is important to try new things out. As a note, when I optimize a design I always simulate and estimate under the best design to check how it works. There are some prospective optimally designed studies which have produced very encouraging results.

My conclusion is: OD and MC sim do different things - they are complementary, not competitive. Both are useful (and perhaps soon to be essential) tools for the modern pharmacometrician.

Regards

Steve

Stephen Duffull
School of Pharmacy
University of Queensland
Brisbane 4072
Australia
Tel +61 7 3365 8808
Fax +61 7 3365 1688
University Provider Number: 00025B
Email: sduffull.aaa.pharmacy.uq.edu.au
www: http://www.uq.edu.au/pharmacy/sduffull/duffull.htm
PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm
MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm

- On 16 Feb 2005 at 13:05:53, "Hans Proost" (j.h.proost.-a-.rug.nl) sent the message


The following message was posted to: PharmPK

Dear Steve,

Thank you for your well-considered comments. I agree with your comment about the difference between MC simulation and optimal design (OD). I was not really clear and complete on this point.

I also have a question about this: can OD tell us how large the parameter bias will be, and how precise the estimates, for a given study design (given, of course, estimates of parameter means and SDs, residual error, etcetera)? MC can certainly do this.

> 1) "OD may be less appropriate for more complex models"

I was thinking about Iterative Two-Stage Bayesian techniques, or sequential PK-PD analysis (first a PK analysis, then a PD analysis with fixed PK parameters). Could OD be performed in these cases?

> 2) "OD is derived for (infinitely) small errors in the data" - not
> true. OD accounts for exactly the same residual variability models
> that MC simulation does. So - take any model from NONMEM you can
> simulate from it and you can optimize a design for it.

Probably I was not quite clear in my comment. I was concerned about the bias and loss of precision due to the nonlinear relationship between (noisy) data and parameter estimates. As a simple example: the parameter is the square of the measurement (i.e. a nonlinear relationship). A measurement of 10 results in a parameter of 100. Measurements of 9 and 11 result in 81 and 121, respectively. A normal distribution around 10 results in a skewed distribution around 100, but with a mean value larger than 100, i.e. a biased estimate. The bias increases rapidly with increasing variability. Similar situations occur for any nonlinear relationship between measurements and parameters. In this simple example it can be avoided by using a lognormal distribution, but this does not work for most nonlinear equations. My question is: can OD handle these errors? MC certainly does.
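
Hans's squaring example can be checked directly by simulation. The sketch below (illustrative, not from the thread) draws normally distributed measurements around 10 and squares them; the mean of the squared values comes out near 101 rather than 100, since E[X^2] = mu^2 + sigma^2.

```python
import random
import statistics

random.seed(0)
mu, sigma = 10.0, 1.0      # measurements ~ N(10, 1), as in the example
n = 100_000

# "Parameter" = square of the (noisy) measurement.
params = [random.gauss(mu, sigma) ** 2 for _ in range(n)]

mean_param = statistics.fmean(params)
# E[X^2] = mu^2 + sigma^2 = 101, not mu^2 = 100: an upward bias that
# grows with the measurement variability sigma^2.
print(round(mean_param, 1))
```

Doubling sigma quadruples the bias term, which is the "increases rapidly with increasing variability" behaviour described above.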

> 3) "OD lacks proof of the pudding" - true - but you have to start
> somewhere (to be honest MC simulations is also lacking here).

Of course one has to know or assume a model, values of the model parameters, their distribution, residual error, etcetera. Is this what you mean is lacking in MC simulations?

> My conclusion is: OD and MC sim do different things - they are
> complementary not competitive. Both are useful (and perhaps soon to be
> essential) tools for the modern pharmacometrician.

I fully agree. My somewhat provocative message was meant to get things clear. Thank you for correcting me.

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.at.rug.nl

- On 17 Feb 2005 at 10:41:25, "Steve Duffull" (sduffull.aaa.pharmacy.uq.edu.au) sent the message


The following message was posted to: PharmPK

Hi Hans

> I have also a question about this: Can OD learn us how large
> parameter bias and precision will be for a given study design
> (and, of course, estimations of parameter means and sds,
> residual error, etcetera) ? MC can certainly do this.

Yes. OD methods are normally reserved for evaluating and optimizing the precision of parameter estimation for a given design and model. They can also be used for evaluating the bias in the design. It is important to remember, however, that OD is a "data-independent" method (i.e. no simulations), and therefore bias does not arise due to simulation error. Bias in this setting is due to the linear approximation to the likelihood. NB: designs to minimize bias have not been well established.
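
The contrast between the two routes to a precision estimate can be sketched as follows (illustrative model and numbers, not from the thread): the OD-style standard error comes from the inverse Fisher information of a linearized mono-exponential model, while the MC-style standard error is the empirical SD of estimates from simulated noisy data. When the noise is small, the two agree closely.

```python
import math
import random
import statistics

random.seed(1)
# Illustrative model: y = exp(-k*t) + eps, eps ~ N(0, sigma), observed at a
# single time point t. The values of k, t and sigma are assumptions.
k_true, t, sigma = 0.1, 10.0, 0.01
y_pred = math.exp(-k_true * t)

# OD route (data-independent): SE from the inverse Fisher information,
# built on the linearized sensitivity dy/dk = -t * exp(-k*t).
se_od = sigma / (t * y_pred)

# MC route (simulation-based): simulate noisy observations, estimate k
# from each one in closed form, and take the empirical SD.
estimates = []
for _ in range(50_000):
    y = y_pred + random.gauss(0.0, sigma)
    estimates.append(-math.log(y) / t)
se_mc = statistics.stdev(estimates)

print(se_od, se_mc)  # nearly identical when the noise is small
```

Increasing sigma makes the nonlinearity of -log(y)/t bite, and the two numbers (and the mean of the MC estimates) start to drift apart, which is exactly the bias-from-linearization point discussed above.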

> > 1) "OD may be less appropriate for more complex models"

> I was thinking about Iterative Two-Stage Bayesian techniques,
> or sequential PK-PD analysis (first PK analysis, then PD
> analysis with fixed PK parameters). Could OD be performed in
> these cases?

Yes. You can do simultaneous PKPD or sequential PKPD with OD [see Duffull et al. Pharm Res 2001;18:83-9 for a sequential example]. Having said that, I believe that, where practical, simultaneous PKPD is the better approach.

> a simple example: the parameter is the square of the
> measurement (i.e. a nonlinear relationship). A measurement of
> 10 results in a parameter of 100. Measurements of 9 and 11
> result in 81 and 121 respectively. A normal distribution
> around 10 results in a skewed distribution around 100, but
> with a mean value larger than 100, i.e. a biased estimate.
> The bias increases rapidly with increasing variability.
> Similar situations occur for any nonlinear relationships
> between measurements and parameters. In this simple example
> it can be avoided by using a lognormal distribution, but this
> does not work for most nonlinear equations. My question is:
> can OD handle these errors? MC certainly does.

OK - I don't quite see the problem, but perhaps I can offer the following comment. OD can handle (as can MC simulation) a variety of different random-effects models for both the parameters and the residual effects on the data. So if you can write the model in NONMEM then you can optimize it (at least for all the models that I can think of at the moment). This includes lognormal distributions of the parameters, etc.

But OD would not be useful for determining the influence of execution model errors (departures from trial protocol) on the power of a study to show a particular result. That situation is particularly amenable to a simulation platform, where simulation can be used to assess the effect of different levels of protocol violation (e.g. patient compliance, investigator compliance).

So, to summarise, I believe that:

1) OD provides a valuable tool for designing "learning" studies, where the primary goal is to learn about the underlying system. In this setting the best designs are generally those where the response variable is most sensitive to perturbations in the system.

2) MC simulation is a valuable tool for designing "confirming" studies, where the primary goal is to make inference about the distribution of responses that arise from the underlying system. In this setting the best designs are generally those where the inference is essentially robust (i.e. has low sensitivity) to the underlying system.

(Obviously, this is a gross oversimplification.)

> > 3) "OD lacks proof of the pudding" - true - but you have to start
> > somewhere (to be honest MC simulations is also lacking here).

> Of course one has to know or assume a model, values of the
> model parameters, their distribution, residual error,
> etcetera. Is this what you mean what is lacking in MC simulations?

My comment was slightly more general. I was just indicating that I don't believe that MC simulation (e.g. for clinical trial simulation) is a completely accepted science either.

Regards

Steve

Stephen Duffull
School of Pharmacy
University of Queensland
Brisbane 4072
Australia
Tel +61 7 3365 8808
Fax +61 7 3365 1688
University Provider Number: 00025B
Email: sduffull.-a-.pharmacy.uq.edu.au
www: http://www.uq.edu.au/pharmacy/sduffull/duffull.htm
PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm
MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm

- On 18 Feb 2005 at 15:26:31, "Hans Proost" (j.h.proost.aaa.rug.nl) sent the message


The following message was posted to: PharmPK

Dear Steve,

Thank you again for your clear and extensive comments.

With respect to 'my problem' with bias and decreased precision due to the nonlinearity, we do not seem to have come to a conclusion. Anyhow, I don't think it is a large problem, since it only becomes apparent at relatively high levels of interindividual variability and data noise. Perhaps I can try to clarify this point in a new message.

> My comment was slightly more general. I was just indicating that I
> don't believe that MC simulation (e.g. for clinical trial simulation) is
> a completely accepted science either.

I hope MC simulation is accepted by you! Perhaps I am infected by the Monte Carlo virus. Just gambling, and ending with a scientific proof ...

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.at.rug.nl


Copyright 1995-2010 David W. A. Bourne (david@boomer.org)