# PharmPK Discussion - Standard deviations for pk parameter estimates

PharmPK Discussion List Archive Index page
• On 30 Oct 2013 at 13:40:29, Min Li (lm3210.usa.-at-.gmail.com) sent the message
Dear All,

I have a very basic question that has confused me for a long time. I hope someone can help me better understand standard deviations of PK parameters in PK modeling.

If you fit mean drug concentration versus time data (only one curve) to a one-compartment open model (oral administration) using nonlinear regression, you get the best estimates for the PK parameters, including V, ka and kel. The software also provides the standard deviation for these estimated PK parameters. What do these standard deviations really mean? How are they calculated? Are they statistically meaningful?

On the other hand, if you use individual curves (drug concentration vs. time) and do the same thing, you also get standard deviations for these PK parameters. How are they calculated in this case?

In these two cases, the standard deviations for the PK parameters are both calculated. How should these values be interpreted? What is the difference between them?

Many thanks,
Maria
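A minimal sketch of the first case Maria describes, fitting a single mean curve by nonlinear regression and reading off asymptotic standard errors from the estimated covariance matrix. Everything below (model, dose, sampling times, concentrations) is hypothetical, and SciPy's `curve_fit` simply stands in for whatever PK software is used:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_cpt_oral(t, V, ka, kel, dose=100.0):
    """One-compartment open model, first-order absorption, F assumed 1 (hypothetical)."""
    return dose * ka / (V * (ka - kel)) * (np.exp(-kel * t) - np.exp(-ka * t))

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])

# Simulated "mean" concentrations: true curve (V=20, ka=1.5, kel=0.15) plus 5% noise
rng = np.random.default_rng(0)
c = one_cpt_oral(t, 20.0, 1.5, 0.15) * (1 + 0.05 * rng.standard_normal(t.size))

popt, pcov = curve_fit(one_cpt_oral, t, c, p0=[10.0, 1.0, 0.1])
se = np.sqrt(np.diag(pcov))   # asymptotic standard errors of V, ka, kel
cv = 100 * se / popt          # relative standard error (%CV) of each estimate
```

These standard errors come from the curvature of the least-squares surface around the optimum, scaled by the residual variance; they describe the precision of the fit to this one curve, not between-subject variability.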

• On 31 Oct 2013 at 10:39:25, Leonid Gibiansky (lgibiansky.at.quantpharm.com) sent the message
Maria,

The standard deviations that you asked about are indeed different. In the first case (fitting the mean data), the standard deviation refers to the confidence that you have in the parameters that describe the mean curve. You may construct confidence intervals based on these SDs and from those estimate the precision of the parameter estimates for the mean curve. This should be OK for preclinical studies where all subjects are very similar (and sampling is often destructive).

In the second case, you study many individuals. Presumably, you have many data points for each individual, so that you can estimate each parameter for each individual. When you study the distribution of these parameters across study subjects, you are likely to see some variability. The standard deviation of the parameter distribution will tell you about variability within the study population rather than characterize the precision of the mean curve. In some cases you may use these parameter distributions to estimate mean parameters (as means of individual parameters) and the precision of the mean (as SD/sqrt(n-1)). This precision is somewhat similar to the first case and characterizes the confidence in the means of the parameter distributions. If the distributions are log-normal, you take the log of the parameters, perform the same computations and then exponentiate (thus computing the geometric mean and CI of the geometric means).

Yet another, more mathematically precise procedure is to use population (nonlinear mixed effect) modeling. By fitting a population model to the data you get both the estimates of inter-subject variability (random effects) and the typical ("mean") parameters, together with the precision of the parameter estimates. You may google "population modeling" or something similar to get tons of references, for example
http://www.nature.com/psp/journal/v1/n9/full/psp20124a.html

Leonid

Leonid Gibiansky, Ph.D.
President, QuantPharm LLC
web: www.quantpharm.com
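The two-stage arithmetic Leonid describes can be sketched in a few lines. The per-subject clearance values below are hypothetical; note the sketch uses the conventional SD/sqrt(n) for the standard error of the mean:

```python
import numpy as np

# Hypothetical individual clearance estimates (L/h), one per subject (stage 1 fits)
cl = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 4.4, 5.8])
n = cl.size

mean_cl = cl.mean()
sd_cl = cl.std(ddof=1)        # between-subject variability
sem_cl = sd_cl / np.sqrt(n)   # precision of the mean

# Log-normal summary: geometric mean and an approximate 95% CI on it
log_cl = np.log(cl)
gm = np.exp(log_cl.mean())
se_log = log_cl.std(ddof=1) / np.sqrt(n)
ci = (np.exp(log_cl.mean() - 1.96 * se_log),
      np.exp(log_cl.mean() + 1.96 * se_log))
```

The first SD answers "how much do subjects differ?", the SEM and CI answer "how well do we know the typical value?", which is exactly the distinction in the message above.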

• On 31 Oct 2013 at 10:41:44, Nick Holford (n.holford.-a-.auckland.ac.nz) sent the message
Maria,

There are two things you should try to understand about PK parameters: variability and uncertainty.

1. A PK parameter such as clearance will vary from person to person. If you have a collection of clearance values, each one from a different person, then you can describe the variability by the standard deviation of the collection of values. This is easy to do and explained in any standard textbook of statistics. The variability of a parameter is useful for describing drugs which have a lot of variability, e.g. morphine, compared with drugs with much less variability, e.g. busulfan.

2. When a PK parameter is estimated there is always some uncertainty about the true value because of things like measurement error and timing of samples. Each PK parameter estimate may be associated with a measure of uncertainty called the standard error. Note that the term for this is not standard deviation. The calculation of the standard error can be done in a variety of ways, and it probably doesn't matter which method is used, because the standard error is only a crude guide to the uncertainty. A better guide is a confidence interval calculated using a bootstrap method. The uncertainty is probably most useful as a guide to how badly designed the PK study was. Big uncertainty in a parameter means a bad design. Unfortunately this is quite common, because people don't think carefully when planning studies.

If your software reports individual parameter estimates with a standard deviation instead of a standard error, I would recommend not using it. Get software that understands that clearance is a primary parameter and kel is a secondary parameter, and that knows the difference between variability and uncertainty.

Best wishes,
Nick
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology, Bldg 503 Room 302A
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email: n.holford.at.auckland.ac.nz
http://holford.fmhs.auckland.ac.nz/
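In the simplest setting where one already has per-subject parameter estimates, the bootstrap Nick recommends amounts to resampling subjects with replacement and reading off percentiles. A hypothetical sketch (the clearance values and resample count are made up):

```python
import numpy as np

# Hypothetical per-subject clearance estimates (L/h)
cl = np.array([4.1, 5.3, 3.8, 6.0, 4.7, 5.1, 4.4, 5.8])
rng = np.random.default_rng(42)

# Case bootstrap: resample subjects with replacement, recompute the statistic
boot_means = np.array([
    rng.choice(cl, size=cl.size, replace=True).mean()
    for _ in range(2000)
])

# Percentile bootstrap 95% confidence interval for the mean clearance
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

With full concentration-time data, the same idea applies one level up: resample subjects, refit the model each time, and take percentiles of the refitted parameters.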

• On 31 Oct 2013 at 10:44:36, Harish Kaushik Kotakonda (kaushikkotakonda.aaa.gmail.com) sent the message
Hello Maria,

When we use any of the standard software packages, the statistics calculated and displayed in the output while fitting the concentration-time data to a model and estimating the model parameters are the standard error (SE) and the %CV of each parameter.

Why do we need these statistics? The accuracy and precision of the parameter estimates are understood from them. Accuracy deals with p-hat in relation to the true parameter value, and precision deals with the %CV.

Estimated model parameters p-hat have no value unless they have a fair degree of precision. This can be assessed by computing each parameter's coefficient of variation:

%CV = (SE / p-hat) * 100

A large parameter CV doesn't imply that the model is incorrect; it may be due to not enough samples or not having samples at the appropriate times. For example, when a model contains several parameters, V = 1000 ± 30 L (CV = 3%) and K = 0.01 ± 0.005 /hr (CV = 50%), the CV is a better way of expressing the variability, because the CV is a relative measure whereas the SE is an absolute measure.

After the model is fit, when the C-hat (predicted concentrations) are displayed, the SD of each predicted concentration at each time point is calculated. Along with the SD, the residuals are also important for qualitatively assessing the goodness of fit to the selected model. Additionally, the SD is used to calculate the variance-covariance matrix in the model fitting.

Regards,
Kaushik
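Kaushik's %CV formula, applied to his own V and K numbers:

```python
def relative_se(estimate, se):
    """%CV of a parameter estimate: (SE / p-hat) * 100."""
    return 100.0 * se / estimate

cv_V = relative_se(1000.0, 30.0)   # V = 1000 +/- 30 L    -> CV = 3%
cv_K = relative_se(0.01, 0.005)    # K = 0.01 +/- 0.005/h -> CV = 50%
```

The same absolute-looking SE of 30 would be disastrous on a parameter of size 0.01, which is why the relative measure is the one worth reporting.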

• On 31 Oct 2013 at 10:47:22, zhoux383.aaa.umn.edu sent the message
Hi Maria,

Let me try to share my thoughts on this problem.

In the first scenario, you get a parameter value, %RSE (CV) and 95% CI when you run individual parameter estimation. These are calculated from the residual error (Y - Yhat) and the independent variable (X), and are related to the nonlinear function. They represent how well the model fits the data and where the coefficients most likely are. Imagine the simple case of linear regression to see how the SEs of alpha and beta are calculated.

In the second (POPPK) scenario, total variability is separated into RUVs and BSVs. RUVs are unexplained variability. BSVs are explained by random variability and structured variability (e.g. covariates) in the parameters. In the output of ADAPT5, you get the parameter value (population mean) and the %RSE of the parameter, plus the SD and the SD's %RSE. The SD represents the between-subject random variability as you defined it by the ETAs. The %RSE shows how well your parameters and SDs are estimated.

The confusing outputs may have led to your question. The SD in ADAPT5 actually refers to the ETAs, which represent random BSV. The %RSE everywhere represents how well parameters are estimated and relates to the model fitting. It is related to RUV but not BSV.

Hope this helps!
Regards,
Jie

• On 1 Nov 2013 at 14:40:01, Min Li (lm3210.usa.-at-.gmail.com) sent the message
Dear All,

Thank you so much for your answers. They are very helpful. I want to confirm whether my understanding is correct. I am talking about preclinical data in rats, and I don't have enough data to do pop PK modeling.

For the first case, curve fitting is performed only on the mean data. There is no way to get the variability information. The standard errors of the PK parameters only indicate the uncertainty, and the resulting intervals (for a 95% CI) would bracket the true values of the parameters in approximately 95% of cases, correct? If the standard error is large, does that mean the model could not explain the data very well? It could also be due to not enough samples or not having samples at the appropriate times, but it doesn't necessarily mean the model is wrong.

Dr. Holford, you said:
"The calculation of the standard error can be done in a variety of ways and it probably doesn't matter which method is used because the standard error is only a crude guide to the uncertainty. A better guide is a confidence interval calculated using a bootstrap method."

For only one set of data (only mean data), I still could not understand how the standard errors are calculated. What do you mean by the standard error being "a crude guide to the uncertainty"? In this case, there is no way to do a bootstrap anyway.

I am also confused about the calculation of the SE for each parameter in the first case. It seems to me very complicated. If you have different initial values for the parameters, will the SEs of the parameters provided by the software also change?

For the second case, if it is a two-stage method, the standard error of each parameter should be representative of the variability of these parameters, not the uncertainty. How does one determine the uncertainty of these parameters in this case, then?

Many thanks,
Min

• On 3 Nov 2013 at 09:41:42, Walt Woltosz (walt.-at-.simulations-plus.com) sent the message
Dear Min,

The PKPlus module in GastroPlus provides the CVs for each fitted parameter. It takes only a few seconds to fit 1-, 2-, and 3-compartment models to a set of data and to compare the models both graphically and by their model parameter CVs, as well as the Akaike Information Criterion and Schwarz Criterion for each model. A variety of weighting schemes are available as well. Plots show absolute, log, and residual views with a mouse click for any model.

You can see more about PKPlus here:
http://www.simulations-plus.com/Products.aspx?pID=11&mID=11

If you want to send your Cp-time data, I would be happy to provide the results to you.

Best regards,
Walt

Walt Woltosz
Chairman and CEO
Simulations Plus, Inc. (NASDAQ: SLP)
42505 10th Street West
Lancaster, CA 93534

• On 3 Nov 2013 at 09:44:17, Roger Jelliffe (jelliffe.-at-.usc.edu) sent the message
Dear Nick et al:

It is interesting that all these discussions revolve around parametric estimates of model parameter values, and the main points under discussion usually appear to be the mean and standard deviation. I really wonder why this is. Why would anyone wish to make an assumption about the shape of a model parameter distribution when it is not necessary? Why postulate a Gaussian, lognormal, or some other multimodal distribution when you can estimate the entire distribution right away, using a nonparametric (NP) approach with no assumptions at all about its shape? Over the years we have directly compared a parametric iterative Bayesian approach with the NP one. This is possible with our IT2B software. The NP approach is not constrained by some parametric assumption. You might look at:

Bustad A, Terziivanov D, Leary R, Port R, Schumitzky A, and Jelliffe R: Parametric and Nonparametric Population Methods: Their Comparative Performance in Analysing a Clinical Data Set and Two Monte Carlo Simulation Studies. Clin. Pharmacokinet., 45: 365-383, 2006.

As I understand it, one is also interested in the various errors present in the data being analyzed. Many people assume an overall error pattern with additive or proportional components, and that is fine. But why not separate that error into its laboratory and clinical components? You can easily do this. For example, our lab likes to first determine the assay error over its full working range before starting the population modeling. Then, in addition, we like to estimate another parameter, which we call lambda: an additive noise term reflecting the errors with which doses are prepared and given, the timing errors associated with giving doses and drawing serum concentrations, model misspecification, and possible changes in model parameter values (or distributions) during the period of data analysis.

What this does is determine, first, the measurement error of the assay. Second, it gives us an estimate of the noise associated with the clinical part of the study. Small lambda: a good, precise clinical study. Big lambda: a noisy clinical part of the study. This is useful information, we think, and that is why we do it. Actually, these other terms are not measurement noise at all, but process noise, which more properly should go in the differential equations themselves.

About uncertainty in the estimates of the model parameter distributions - yes, bootstrap is the best current method.

About software - just how does software "understand" that clearance is a primary parameter while Kel is a secondary one? It has been obvious to many that they are quite interchangeable, and one can use either as desired. I am not as interested in a hypothetical volume completely cleared of a drug as I am in the amount of drug transferred from one compartment to another in a certain time. The mathematicians I know see no difference between the two, and I am told that there are many texts on model identifiability that show this. You may like tomahtoes, I like tomatoes. Chacun à son goût.

Respectfully,
Roger

• On 3 Nov 2013 at 09:45:42, Charlie Brindley (charlie.brindley.-at-.kinetassist.com) sent the message
Dear Min,

Actually, it is possible to estimate variability from mean concentrations derived from a destructive blood sampling study without using a mixed-effect model. See Nedelman et al. 1995, Pharm Res 12:124, which applies Bailer's method for obtaining confidence intervals (and SEs) for AUC when there is only one sample per subject but multiple subjects are sampled at several sampling times.

Charlie
KinetAssist Ltd
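For readers hunting the formula: Bailer's method forms the AUC as a weighted sum of the per-time-point mean concentrations (trapezoidal weights) and builds its variance from the per-time-point SDs. A sketch with hypothetical destructive-sampling data; this follows the Bailer/Nedelman idea but is not code from either paper:

```python
import numpy as np

# Hypothetical destructive-sampling summary: one group of animals per time point
t    = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # sampling times (h)
mean = np.array([12.0, 18.0, 15.0, 8.0, 3.0])    # mean concentration at each time
sd   = np.array([2.0, 3.0, 2.5, 1.5, 0.8])       # SD at each time
n    = np.array([4, 4, 4, 4, 4])                 # animals per time point

# Linear trapezoidal weights: AUC = sum(w_i * mean_i)
w = np.empty_like(t)
w[0]    = (t[1] - t[0]) / 2
w[-1]   = (t[-1] - t[-2]) / 2
w[1:-1] = (t[2:] - t[:-2]) / 2

auc = np.sum(w * mean)
# Bailer's variance: Var(AUC) = sum(w_i^2 * sd_i^2 / n_i), since group means are independent
se_auc = np.sqrt(np.sum(w**2 * sd**2 / n))
ci = (auc - 1.96 * se_auc, auc + 1.96 * se_auc)
```

The trick is that AUC is a linear combination of independent group means, so its variance follows directly from the per-group variances; this is also why the approach does not extend cleanly to parameters that are nonlinear in the concentrations.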

• On 3 Nov 2013 at 09:47:19, Jie Zhou (zhoux383.at.umn.edu) sent the message
Hi Min,

The pop-PK method can be used for animal data even if there are not enough data points. In the extreme case of one sample per individual, fixing the residual error is required to perform the analysis. You would be able to get the parameters, the SD of the parameters (intraindividual variability) and uncertainty estimates for both. All are reported in the results section.

Or you can use naive pooled data: take the means at each time point and treat the data as "one individual". You can get estimates for the parameters and uncertainty estimates for each parameter. Your statements are right. And you are right that the parameters and uncertainty will change a little with different initial values, but not too much if your initial values are reasonable. The estimation of the function parameters is implemented through several different algorithms, e.g. weighted least squares, maximum likelihood, etc. I believe there are mathematical equations in the references to show you how the uncertainty is calculated. Just remember that you have data from one "individual", so there is no intraindividual variation.

One thing to mention is that for AUC, CL and other non-compartmental parameters, the SD (intraindividual variability) can be estimated even for sparse sampling data. You can check the help guide in Phoenix WinNonlin (sparse sampling). Briefly, the AUC is calculated through the mean concentrations, and the SD is calculated based on the number of samples per time group, the number of samples per individual and the variation within each time group. Alternatively, use the bootstrapping method according to the reference: J Pharm Sci, 87, 372-387 (1998).

Hope this helps!
Regards,
Jie

• On 3 Nov 2013 at 21:17:35, J.H.Proost (j.h.proost.at.RUG.NL) sent the message
Dear Jie, Min and others,

It is good to have this discussion. Here is my one-cent contribution. You propose to perform a naive pooling approach on mean data. Why mean data? The correct way to do naive pooling is to use all individual data. Using mean data implies that you do a pre-analysis and leave out relevant information from the individual values, e.g. with respect to the variability. I'm quite sure that using all individual data will provide more reliable standard errors and confidence intervals, e.g. by bootstrapping. Moreover, I don't see any argument against using all individual data (in 2013 we don't talk about longer runtimes, do we?).

best regards,
Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics, Toxicology and Targeting
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
Email: j.h.proost.-a-.rug.nl

• On 4 Nov 2013 at 13:30:14, Eleveld, DJ (d.j.eleveld.at.umcg.nl) sent the message
Dear Roger et al:

The choice to use a normal distribution is not arbitrary. The normal distribution has the highest entropy of all distributions with the same mean and variance. So even when the normal distribution is the "wrong" distribution, it is still the best choice by virtue of being the "least worst" - except when you know a priori what the true distribution really is, but this doesn't happen very often, especially for biological systems.

Sometimes distributional assumptions are necessary in the sense that some methods which we take for granted nowadays could not otherwise be done with the computing power available to mere mortals. As I understand it, current non-parametric methods require more computing power than parametric methods.

This brings me to something I don't understand about the non-parametric approach. I often see it stated that non-parametric methods make "no assumptions at all" about the shape of distributions. Does this include "smoothness" of a distribution? It is a very innocuous and, I think, very reasonable assumption that the true distribution of some complex natural phenomenon is smooth. But I am not sure whether this is taken into account in current non-parametric methods. Can you clarify?

Warm regards,
Douglas Eleveld

• On 6 Nov 2013 at 21:51:37, Bob Leary (Bob.Leary.aaa.certara.com) sent the message
Douglas,

The maximum entropy argument for assuming a normal distribution is not very compelling in the PK world. The normal distribution is only the maximum entropy distribution for random variables which have a range on the entire real line (both negative and positive values). This is almost never the case for PK parameters, which are often known to be non-negative. Here the maximum entropy distribution is exponential (for a given known mean). To enforce non-negativity, we usually alter the normality assumption to log-normality, which is convenient but not maximum entropy.

Two stronger arguments for the use of a normality (or log-normality) assumption are:

a) it naturally leads to an extended least squares formulation for the corresponding likelihood, which has certain 'good' properties and is often a reasonable objective function even when the underlying random variable is not normal or log-normal;

b) it greatly simplifies the implementation of parametric EM methods.

Bob Leary
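Bob's point can be written out explicitly: with support on all of the real line and a fixed mean and variance, the maximum-entropy density is the normal, while with support on [0, ∞) and a fixed mean it is the exponential:

```latex
\text{support } \mathbb{R},\ \mathrm{E}[x]=\mu,\ \mathrm{Var}[x]=\sigma^2:
\qquad p(x) = \frac{1}{\sigma\sqrt{2\pi}}
              \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)

\text{support } [0,\infty),\ \mathrm{E}[x]=\mu:
\qquad p(x) = \frac{1}{\mu}\, e^{-x/\mu}
```

Changing the support constraint changes the maximizer, which is why the "normal is least worst" argument does not carry over unchanged to non-negative PK parameters.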

• On 6 Nov 2013 at 21:53:03, Jie Zhou (zhoux383.-at-.umn.edu) sent the message
Hi Hans,

Thanks for your kind advice! We had a special data set in which there is only one sample per individual (destructive sampling of animals). Assuming the individuals perform similarly, we took the means at each time point (naïve-pooled method) to generate a structural model for PK/PD. After establishing the structural PK/PD model, the pop-PK/PD method with all individual data can then be used to acquire more information under the same model. I agree that the naïve-pooled method is limited by throwing away the information on within-time-group variation, but it can be the initial step for model establishment and data processing.

And I did try using all individual data and bootstrapping for estimating parameters such as AUC, CL and other non-compartmental parameters. However, further application of this method to more complicated functions, such as compartmental analysis or PK/PD analysis, seems not to be guaranteed. Even the authors of the "destructive sampling bootstrapping method" had some concerns about estimating parameters which are not linear combinations of the concentrations, since a "pseudo-individual" time profile is generated and bootstrapped in the method. I would love to try the method but am just wondering whether people trust it or not.

Thanks!
Regards,
Jie

• On 7 Nov 2013 at 12:17:09, J.H.Proost (j.h.proost.-at-.rug.nl) sent the message
Dear Jie,

Thank you for your reply. A few comments from my side:

1) "we took the means at each time point (naïve-pooled method)". In my view, the essence of the naïve-pooled method is that all data are pooled without considering that they are obtained from different individuals (the aspect of 'destructive sampling' is not relevant, as pointed out clearly by Nick Holford). Using the means instead of all individual data is even one step more 'naïve'.

2) The use of means instead of all individual data may be a good starting point, but I don't see any reason to use this 'naïve-naïve' method in a final analysis.

3) You say: 'further application of this method towards more complicated functions such as CMT analysis or PK/PD analysis seems to be not guaranteed'. Is this your experience, or from the authors of the "destructive sampling bootstrapping method"? Do you have a reference?

best regards,
Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics, Toxicology and Targeting
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands

• On 7 Nov 2013 at 16:39:26, Jie Zhou (zhoux383.at.umn.edu) sent the message
Hi Hans,

Thanks for your comments! I have found out how to perform naïve pooling using all data points, assuming them to come from one individual. I had a wrong impression before; thanks for catching that.

For comment 3), I did consult the authors, and their reply was (if I may quote his email):

"Using the (nested) bootstrap based on pseudo profiles we were able to estimate almost any parameter derivable from non-compartmental kinetics and, most importantly, its variability. We did not explore compartmental models, but I am sure it will become extremely complicated and will probably deliver highly variable solutions, if any. In many so-called 'rich data' situations already 2-CMT (input - central - peripheral) micro-parameters are difficult to estimate with sufficient precision and reproducibility (being at least robust in sensitivity analyses); in destructive sampling we would not have any profiles. Under some reasonable assumptions I could imagine applications in PKPD. Using simple pooled data will not provide closed-form variability estimates apart from linear functions of C, and I think the same holds true for PopPK (= nonlinear mixed models)." - Dr. Harry Mager

Thanks!
Regards,
Jie

Want to post a follow-up message on this topic? If this link does not work with your browser send a follow-up message to PharmPK@lists.ucdenver.edu with "Standard deviations for pk parameter estimates" as the subject

Copyright 1995-2014 David W. A. Bourne (david@boomer.org)