- On 30 Oct 2013 at 13:40:29, Min Li (lm3210.usa.-at-.gmail.com) sent the message


Dear All,

I have a very basic question that has confused me for a long time. I hope someone can help me better understand

standard deviations of PK parameters in PK modeling.

If you fit mean drug concentration versus time data (only one curve) to a one-compartment open model (oral administration) using a nonlinear regression method, you will get the best estimates for the PK parameters, including V, ka and kel. The software will also provide the standard deviations for these estimated PK parameters. What do these standard deviations really mean? How are they calculated? Are they statistically meaningful?

On the other hand, if you use individual curves (drug concentration vs time) to do the same thing, you will also get standard deviations for these PK parameters. How are they calculated in this case?

In these two cases, the standard deviations for the PK parameters are both calculated. How should these values be interpreted? What is the difference between them?

Many thanks

Maria

- On 31 Oct 2013 at 10:39:25, Leonid Gibiansky (lgibiansky.at.quantpharm.com) sent the message


Maria,

The standard deviations that you asked about are indeed different. In the first case (fitting the mean data), the standard deviation refers to the confidence that you have in the parameters that describe the mean curve. You may construct confidence intervals based on these SDs, and from those estimate the precision of the parameter estimates for the mean curve. This should be OK for preclinical studies where all subjects are very similar (and sampling is often destructive).

In the second case, you study many individuals. Presumably, you have

many data points for each individual so that you can estimate each

parameter for each individual. When you study distribution of these

parameters across study subjects, you are likely to see some

variability. The standard deviation of the parameter distribution tells you about variability within the study population rather than characterizing the precision of the mean curve. In some cases you may use

these parameter distributions and estimate mean parameters (as means of

individual parameters) and precision of the mean (as SD/sqrt(n-1)). This

precision is somewhat similar to the first case and characterizes the

confidence in the means of the parameter distributions. If the

distributions are log-normal, you take the log of the parameters,

perform the same computations and then exponentiate (thus computing

geometric mean and CI of the geometric means).
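Leonid's two-stage summary can be sketched in a few lines; the individual clearance values below are hypothetical, and note that the conventional standard error of the mean divides by sqrt(n) rather than sqrt(n-1):

```python
import numpy as np

# hypothetical individual clearance estimates (L/h), one per subject
cl = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.5, 4.2, 5.0])
n = cl.size

sd = cl.std(ddof=1)        # between-subject variability
sem = sd / np.sqrt(n)      # precision (standard error) of the mean

log_cl = np.log(cl)        # log-normal case: work on the log scale
gm = np.exp(log_cl.mean())  # geometric mean
half = 1.96 * log_cl.std(ddof=1) / np.sqrt(n)
ci = (np.exp(log_cl.mean() - half), np.exp(log_cl.mean() + half))

print(f"mean {cl.mean():.2f}, between-subject SD {sd:.2f}, SEM {sem:.2f}")
print(f"geometric mean {gm:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```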

Yet another, more mathematically precise procedure is to use population

(nonlinear mixed effect) modeling. By fitting a population model to the

data you get both the estimates of inter-subject variability (random

effects) and the typical ("mean") parameters together with the precision

of the parameter estimates. You may google for "population modeling" or

something similar to get tons of references, for example

http://www.nature.com/psp/journal/v1/n9/full/psp20124a.html

Leonid

Leonid Gibiansky, Ph.D.

President, QuantPharm LLC

web: www.quantpharm.com

- On 31 Oct 2013 at 10:41:44, Nick Holford (n.holford.-a-.auckland.ac.nz) sent the message


Maria,

There are two things you should try to understand about PK parameters - variability and uncertainty.

1. A PK parameter such as clearance will vary from person to person. If you have a collection of

clearance values each one from a different person then you can describe the variability by the

standard deviation of the collection of values. This is easy to do and explained in any standard

textbook of statistics. The variability of a parameter is useful for describing drugs which have a lot of variability (e.g. morphine) compared with drugs that have much less variability (e.g. busulfan).

2. When a PK parameter is estimated there is always some uncertainty about the true value because of

things like measurement error and timing of samples. Each PK parameter estimate may be associated

with a measure of uncertainty called the standard error. Note that the term for this is not standard

deviation. The calculation of the standard error can be done in a variety of ways and it probably

doesn't matter which method is used because the standard error is only a crude guide to the

uncertainty. A better guide is a confidence interval calculated using a bootstrap method. The

uncertainty is probably most useful as a guide to how badly designed the PK study was. Big

uncertainty in a parameter means a bad design. Unfortunately this is quite common because people

don't think carefully when planning studies.
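Nick's suggested bootstrap can be sketched minimally: resample individuals with replacement and take percentiles of the resampled statistic. The clearance values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
cl = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.5, 4.2, 5.0])  # hypothetical CLs

# nonparametric bootstrap: resample subjects, recompute the mean each time
boot_means = np.array([rng.choice(cl, size=cl.size, replace=True).mean()
                       for _ in range(2000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% CI for mean CL: {lo:.2f}-{hi:.2f}")
```

The same resampling idea applies to any statistic (a fitted parameter, an AUC), which is why the bootstrap CI is a more direct guide to uncertainty than an asymptotic standard error.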

If your software reports individual parameter estimates with a standard deviation instead of a

standard error I would recommend not using it. Get software that understands that clearance is a

primary parameter and kel is a secondary parameter and knows the difference between variability and

uncertainty.

Best wishes,

Nick

--

Nick Holford, Professor Clinical Pharmacology

Dept Pharmacology & Clinical Pharmacology, Bldg 503 Room 302A

University of Auckland,85 Park Rd,Private Bag 92019,Auckland,New Zealand

email: n.holford.at.auckland.ac.nz

http://holford.fmhs.auckland.ac.nz/

- On 31 Oct 2013 at 10:44:36, Harish Kaushik Kotakonda (kaushikkotakonda.aaa.gmail.com) sent the message


Hello Maria,

When we use any of the standard software, the statistics calculated and displayed in the output while fitting the plasma concentration-time (PCT) data to a model and estimating the model parameters are the standard error (SE) and %CV of each parameter.

Why do we need these statistics? The accuracy and precision of the parameter estimates are understood from them: accuracy deals with how close p-hat is to the true parameter value, and precision deals with the %CV.

Estimated model parameters p-hat have little value unless they have a fair degree of precision. This can be assessed by computing each parameter's coefficient of variation:

%CV = (SE / p-hat) * 100

A large parameter CV doesn't imply that the model is incorrect; it may be due to not having enough samples or not having samples at the appropriate times. When a model contains several parameters, for example V = 1000 ± 30 L (CV = 3%) and K = 0.01 ± 0.005 /hr (CV = 50%), the CV is a better way of expressing the variability, because the CV is a relative measure whereas the SE is an absolute measure.
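The two CVs in that example can be checked with a couple of lines (numbers taken from the text above):

```python
# %CV = (SE / estimate) * 100 for the two example parameters
examples = [("V (L)", 1000.0, 30.0), ("K (1/hr)", 0.01, 0.005)]
cvs = [100 * se / est for _, est, se in examples]
for (name, _, _), cv in zip(examples, cvs):
    print(f"{name}: CV% = {cv:.0f}")
```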

After the model is fit, when the C-hat values (predicted concentrations) are displayed, the SD of each of these predicted concentrations at each time point is calculated. Along with the SD, the residuals are also important for qualitatively assessing the goodness of fit of the selected model; additionally, the SD is used to calculate the variance-covariance matrix in the model fitting.

Regards

Kaushik

- On 31 Oct 2013 at 10:47:22, zhoux383.aaa.umn.edu sent the message


Hi Maria,

I'll try to share my thoughts on this problem:

In the first scenario, you get the parameter value, %RSE (CV) and 95% CI when you run an individual parameter estimation. These are calculated from the residual error (Y - Yhat) and the independent variable (X), and depend on the nonlinear function. They represent how well the model fits the data and where the coefficients most likely are. Imagine the simple case of linear regression to see how the SEs of alpha and beta are calculated.
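To make that linear-regression analogy concrete, here is a minimal sketch (with hypothetical x, y data) of how the SEs of alpha (intercept) and beta (slope) fall out of the residual variance and the design matrix:

```python
import numpy as np

x = np.array([1.0, 2, 3, 4, 5, 6])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])  # hypothetical observations
n = x.size

X = np.column_stack([np.ones(n), x])         # design matrix [1, x]
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - 2)                 # residual variance (Y - Yhat)
cov = s2 * np.linalg.inv(X.T @ X)            # covariance of (alpha, beta)
se = np.sqrt(np.diag(cov))
print(f"alpha {beta_hat[0]:.3f} (SE {se[0]:.3f}), beta {beta_hat[1]:.3f} (SE {se[1]:.3f})")
```

Nonlinear regression does the same thing with a linearization (the Jacobian in place of X), which is where the asymptotic SEs of PK parameters come from.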

In the second (pop-PK) scenario, total variability is separated into RUVs and BSVs. RUVs are unexplained variability. BSVs are described by random variability and structured variability (e.g. covariates) in the parameters. In the output of ADAPT5, you get the parameter value (population mean) and its %RSE, plus the SD and the SD's %RSE. The SD represents the between-subject random variability as you defined it through the ETAs, and the %RSE shows how well your parameters and SDs are estimated.

The confusing outputs may have led to your question. The SD in ADAPT5 actually refers to the ETAs, which represent random BSV. The %RSE everywhere represents how well the parameters are estimated and relates to the model fitting; it is related to the RUV but not the BSV.

Hope this helps!

Regards,

Jie

- On 1 Nov 2013 at 14:40:01, Min Li (lm3210.usa.-at-.gmail.com) sent the message


Dear All,

Thank you so much for your answers. They are very helpful. I want to confirm whether my understanding is correct.

I am talking about preclinical data in rats, and I don't have enough data to do pop PK modeling.

For the first case, curve fitting is performed only on the mean data. There is no way to get the variability information. The standard errors of the PK parameters only indicate the uncertainty, and the resulting intervals (for a 95% CI) would bracket the true parameter values in approximately 95% of cases, correct? If the standard errors are large, that means the model could not explain the data very well, correct? It could also be due to not having enough samples or not having samples at the appropriate times, but it doesn't necessarily mean the model is wrong.

Dr. Holford, you said

"The calculation of the standard error can be done in a variety of ways and it probably doesn't

matter which method is used because the standard error is only a crude guide to the uncertainty. A

better guide is a confidence interval calculated using a bootstrap method."

For only one set of data (only the mean data), I still could not understand how the standard errors are calculated. What do you mean by the standard error being "a crude guide to the uncertainty"? In this case, there is no way to do a bootstrap anyway.

I am also confused about the calculation of the SE for each parameter in the first case. It seems very complicated to me. If you use different initial values for the parameters, will the SEs of the parameters provided by the software also change?

For the second case, if it is a two-stage method, the standard error of each parameter should be representative of the variability of these parameters, not the uncertainty. How, then, do you determine the uncertainty of these parameters in this case?

Many thanks.

Min

- On 3 Nov 2013 at 09:41:42, Walt Woltosz (walt.-at-.simulations-plus.com) sent the message


Dear Min,

The PKPlus module in GastroPlus provides the CVs for each fitted parameter. It takes only a few seconds to fit 1-, 2-, and 3-compartment models to a set of data and to compare the models both graphically and through their model parameter CVs, as well as the Akaike Information Criterion and Schwarz Criterion for each model. A variety of weighting schemes are available as well.

Plots show absolute, log, and residuals with a mouse click for any model.

You can see more about PKPlus here:

http://www.simulations-plus.com/Products.aspx?pID=11&mID=11

If you want to send your CP-time data, I would be happy to provide the

results to you.

Best regards,

Walt

Walt Woltosz

Chairman and CEO

Simulations Plus, Inc. (NASDAQ: SLP)

42505 10th Street West

Lancaster, CA 93534

- On 3 Nov 2013 at 09:44:17, Roger Jelliffe (jelliffe.-at-.usc.edu) sent the message


Dear Nick et al:

It is interesting that all these discussions revolve around parametric estimates of model

parameter values. And the main points under discussion usually appear to be those of the mean and

standard deviation. I really wonder why this is. Why would anyone wish to make some assumption about

the shape of a model parameter distribution when it is not necessary? Why postulate a Gaussian or

lognormal or some other multimodal distribution when you can estimate the entire distribution right away, using a nonparametric (NP) approach with no assumptions at all about its shape? Over the years we have

directly compared a parametric iterative Bayesian approach with the NP one. This is possible with

our IT2B software. The NP approach is not constrained by some parametric assumption. You might look

at

Bustad A, Terziivanov D, Leary R, Port R, Schumitzky A, and Jelliffe R: Parametric and Nonparametric

Population Methods: Their Comparative Performance in Analysing a Clinical Data Set and Two Monte

Carlo Simulation Studies. Clin. Pharmacokinet., 45: 365-383, 2006.

As I understand it, one also is interested in the various errors present in the data that is

analyzed. Many people assume an overall error pattern, having additive or proportional components,

and that is fine. But why not separate that error into its lab and clinical components? You can

easily do this. For example, our lab likes first to determine the assay error over its full working range before starting the pop modeling. Then, in addition, we like to estimate another parameter

which we call lambda, which is an additive noise term reflecting the errors with which doses are

prepared and given, the time errors associated with giving doses and drawing serum concentrations,

model misspecification, and possible changes in model parameter values (or distributions) during the

period of data analysis. What this does is to determine, first, the measurement error of the assay.

Then, second, it gives us an estimate of the noise associated with the clinical part of the study.

Small lambda, good precise clinical study. Big lambda, noisy clinical part of the study. This is

useful information, we think, and that is why we do it. Actually, these other terms are not

measurement noise at all, but are process noise which more properly should go in the differential

equations themselves.

About uncertainty in the estimates of the model parameter distributions - yes, bootstrap is the

best current method.

About software - just how does software "understand" that clearance is a primary parameter

while Kel is a secondary one? It has been obvious to many that they are quite interchangeable, and

one can use either one as desired. I am not as interested in a hypothetical volume completely

cleared of a drug as I am in the amount of drug transferred from one compartment to another in a

certain time. The mathematicians I know see no difference between the two, and I am told that there

are many texts on model identifiability that show this. You may like tomahtoes, I like tomatoes. Chacun à son goût (to each his own).

Respectfully,

Roger

- On 3 Nov 2013 at 09:45:42, Charlie Brindley (charlie.brindley.-at-.kinetassist.com) sent the message


Dear Min,

Actually, it is possible to estimate variability from mean concentrations derived from a destructive

blood sampling study without using a mixed-effect model.

See Nedelman et al. 1995, Pharm Res 12:124, which applies Bailer's method for obtaining confidence intervals (and SEs) for AUC when there is only one sample per subject but multiple subjects are sampled at each of several sampling times.
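Bailer's method treats AUC as a linear combination of the time-point means, so its standard error follows directly from the per-time-point variances. A minimal sketch with hypothetical destructive-sampling data:

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # sampling times (h)
mean_c = np.array([3.2, 4.5, 3.8, 2.1, 0.6])  # mean concentration per time
sd_c = np.array([0.5, 0.7, 0.6, 0.4, 0.2])    # between-animal SD per time
n = np.array([5, 5, 5, 5, 5])                 # animals sacrificed per time

# linear trapezoidal coefficients: AUC = sum(w * mean_c)
w = np.zeros_like(t)
w[0] = (t[1] - t[0]) / 2
w[-1] = (t[-1] - t[-2]) / 2
w[1:-1] = (t[2:] - t[:-2]) / 2

auc = w @ mean_c
# Bailer: Var(AUC) = sum(w^2 * SD^2 / n), since the means are independent
se_auc = np.sqrt(np.sum(w**2 * sd_c**2 / n))
print(f"AUC = {auc:.2f}, SE = {se_auc:.2f} (Bailer)")
```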

Charlie

KinetAssist Ltd

- On 3 Nov 2013 at 09:47:19, Jie Zhou (zhoux383.at.umn.edu) sent the message


Hi Min,

The pop-PK method can be used for animal data even if there are not enough data points. In the extreme case of one sample per individual, fixing the residual error is required to perform the analysis. You would be able to get the parameters, the SD of the parameters (between-subject variability) and uncertainty estimates for both. All are reported in the results section.

Or you can use naive pooled data: take the means at each time point and treat the data as "one individual". You can get estimates for the parameters and uncertainty estimates for each parameter. Your statements are right. And you are right that the parameters and uncertainty will change a little with different initial values, but not too much if your initial values are reasonable. The estimation of the function parameters is implemented through several different algorithms, e.g. weighted least squares, maximum likelihood, etc. I believe there are mathematical equations in the references to show you how the uncertainty is calculated. Just remember that you have data from one "individual", so there is no between-subject variation.

One thing to mention is that non-compartmental parameters such as AUC, CL, etc. and their SD (between-subject variability) can be estimated even from sparse sampling data. You can check the help guide in Phoenix WinNonlin (sparse sampling). Briefly, AUC is calculated from the mean concentrations, and the SD is calculated based on the number of samples per time group, the number of samples per individual and the variation within each time group. Or use the bootstrapping method according to the reference: J Pharm Sci, 87, 372-387 (1998).

Hope this helps!

Regards

Jie

- On 3 Nov 2013 at 21:17:35, J.H.Proost (j.h.proost.at.RUG.NL) sent the message


Dear Jie, Min and others,

It is good to have this discussion. Here is my one-cent contribution. You propose to perform a naive pooling approach on mean data. Why mean data? The correct way to do naive pooling is to use all individual data. Using mean data implies that you do a pre-analysis and leave out relevant information from the individual values, e.g. with respect to the variability. I'm quite sure that using all individual data will provide more reliable standard errors and confidence intervals, e.g. by bootstrapping. Moreover, I don't see any argument against using all individual data (in 2013 we don't talk about longer runtimes, do we?).

best regards,

Hans Proost

Johannes H. Proost

Dept. of Pharmacokinetics, Toxicology and Targeting

University Centre for Pharmacy

Antonius Deusinglaan 1

9713 AV Groningen, The Netherlands

Email: j.h.proost.-a-.rug.nl

- On 4 Nov 2013 at 13:30:14, Eleveld, DJ (d.j.eleveld.at.umcg.nl) sent the message


Dear Roger et al:

The choice to use a normal distribution is not arbitrary. The normal distribution has the highest entropy of all distributions with the same mean and variance. So even when the normal distribution is the "wrong" distribution, it is still the best choice by virtue of being the "least worst", except in the case when you know a priori what the true distribution really is; but this doesn't happen very often, especially for biological systems.

Sometimes distributional assumptions are necessary in the sense that, without them, some methods which we take for granted nowadays could not be run with the computing power available to mere mortals. As I understand it, current non-parametric methods require more computing power than parametric methods.

This brings me to something I don't understand about the non-parametric approach. I often see it stated that non-parametric methods make "no assumptions at all" about the shape of distributions. Does this include the "smoothness" of a distribution? It is a very innocuous and, I think, very reasonable assumption that the true distribution of some complex natural phenomenon is smooth. But I am not sure whether this is taken into account in current non-parametric methods. Can you clarify?

Warm regards,

Douglas Eleveld

- On 6 Nov 2013 at 21:51:37, Bob Leary (Bob.Leary.aaa.certara.com) sent the message


Douglas,

The maximum entropy argument for assuming a normal distribution is not very compelling in the PK world. The normal distribution is the maximum entropy distribution only for random variables which range over the entire real line (both negative and positive values). This is almost never the case for PK parameters, which are often known to be non-negative. Here the maximum entropy distribution is the exponential (for a given known mean). To enforce non-negativity, we usually alter the normality assumption to log-normality, which is convenient but not maximum entropy.
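This point is easy to check numerically: with the mean fixed at 1 on [0, inf), the exponential has larger differential entropy than, say, a lognormal with the same mean (the sigma of 0.5 here is an arbitrary illustrative choice):

```python
import numpy as np
from scipy import stats

expon = stats.expon(scale=1.0)                         # mean 1, support [0, inf)
s = 0.5
lognorm = stats.lognorm(s=s, scale=np.exp(-s**2 / 2))  # also mean 1

# differential entropies: the exponential should come out larger
print(float(expon.entropy()), float(lognorm.entropy()))
```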

Two stronger arguments for use of a normality (or log normality) assumption are

a) it naturally leads to an extended least squares formulation for the corresponding likelihood,

which has certain 'good' properties and is often a reasonable objective function

even when the underlying random variable is not normal or log normal

b) it greatly simplifies the implementation of parametric EM methods

Bob Leary

- On 6 Nov 2013 at 21:53:03, Jie Zhou (zhoux383.-at-.umn.edu) sent the message


Hi Hans,

Thanks for your kind advice!

We had a special data set where there is only one sample per individual (destructive sampling of animals). Assuming the individuals behave similarly, we took the means at each time point (naïve-pooled method) to generate a structural model for PK/PD. After establishing the structural PK/PD model, the pop-PK/PD method with all individual data can then be used to acquire more information under the same model. I agree that the naïve-pooled method is limited by throwing away the information on within-time-group variation, but it can be the initial step for model establishment and data processing.

And I did try using all individual data and bootstrapping for estimating non-compartmental parameters such as AUC, CL, etc. However, further application of this method to more complicated functions such as compartmental (CMT) analysis or PK/PD analysis does not seem to be guaranteed. Even the authors of the "destructive sampling bootstrapping method" had some concerns about estimating parameters which are not linear combinations of the concentrations, since "pseudo-individual" time profiles are generated and bootstrapped in the method. I would love to try the method but am just wondering whether people believe in it.

Thanks!

Regards,

Jie

- On 7 Nov 2013 at 12:17:09, J.H.Proost (j.h.proost.-at-.rug.nl) sent the message


Dear Jie,

Thank you for your reply. A few comments from my side:

1) "we took the means at each time point (naïve-pooled method)"

In my view, the essence of the naïve-pooled method is that all data are pooled without considering

that they are obtained from different individuals (the aspect of 'destructive sampling' is not

relevant, as pointed out clearly by Nick Holford). Using the means instead of all individual data is

even one step more 'naïve'.

2) The use of means instead of all individual data may be a good starting point, but I don't see any

reason to use this 'naïve-naïve' method in a final analysis.

3) You say: 'further application of this method towards more complicated functions such as CMT

analysis or PK/PD analysis seems to be not guaranteed'. Is this your experience, or is it from the authors of the "destructive sampling bootstrapping method"? Do you have a reference?

best regards,

Hans Proost

Johannes H. Proost

Dept. of Pharmacokinetics, Toxicology and Targeting

University Centre for Pharmacy

Antonius Deusinglaan 1

9713 AV Groningen, The Netherlands

- On 7 Nov 2013 at 16:39:26, Jie Zhou (zhoux383.at.umn.edu) sent the message


Hi Hans,

Thanks for your comments!

I found out how to perform naïve pooling using all data points, assuming them to come from one individual. I had a wrong impression before. Thanks for catching that.

Regarding comment 3), I did consult the authors, and their reply was (if I may quote his email): "Using the (nested) bootstrap based on pseudo profiles we were able to estimate almost any

parameter derivable from non-compartmental kinetics and, most importantly, its variability. We did

not explore compartmental models, but I am sure it will become extremely complicated and will

probably deliver highly variable solutions, if any. In many so-called "rich data" situations already

2-CMT (input - central - peripheral) micro-parameters are difficult to estimate with sufficient

precision and

reproducibility (being at least robust in sensitivity analyses), in destructive sampling we would

not have any profiles. Under some reasonable assumptions I could imagine applications in PKPD. Using

simple pooled data will not provide closed-form variability estimates apart from linear functions of

C and I think the same holds true for PopPK (= nonlinear mixed models)"- from Dr. Harry Mager

Thanks!

Regards,

Jie


Copyright 1995-2014 David W. A. Bourne (david@boomer.org)