The following message was posted to: PharmPK
List Members,
What is the distribution of serum concentration values (of most drugs)
usually assumed to be? I have seen several references to normal and
log-normal distributions, and I wonder what evidence there is for any
of these. I also just read a paper suggesting a gamma or log-Cauchy
distribution might be more appropriate. We are dealing with small
numbers of animals and often have insufficient data to test our
assumptions, so any comments would be appreciated.
Thanks,
vf
Virginia Fajt, DVM, PhD
Pueblo, Colorado
fajt.-at-.iastate.edu
Hi Virginia,
The commonly accepted statistical distribution for drug levels is the
log-normal distribution.
The possible values for the drug concentration in blood/serum/plasma
are limited on the lower-value side by the LOQ of the analytical
method, while the high-value end is open. Therefore, the actual
distribution is not symmetrical.
Distributions of this kind become approximately normal after
log-transformation (the log compresses the scale at the high-value end).
For this reason, extensive pharmacokinetic parameters (AUC, Cmax)
are analyzed statistically under the normal-distribution assumption
after log-transformation.
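As a minimal illustration (in Python, with hypothetical values rather
than real assay data), log-normally distributed concentrations are
right-skewed on the original scale but close to symmetric after
log-transformation:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# hypothetical concentrations: median 100 ng/mL, sigma 0.4 on the log scale
conc = rng.lognormal(mean=np.log(100.0), sigma=0.4, size=10000)

print(stats.skew(conc))          # clearly positive: right-skewed
print(stats.skew(np.log(conc)))  # near zero: approximately normal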
I hope this helps,
radu
Radu D. Pop
Director Biopharmaceutics
Pharma Medica Research Inc.
966 Pantera Drive
Mississauga, Ontario
Canada, L4W 2S1
The following message was posted to: PharmPK
RPop.-at-.pharmamedica.com wrote:
> Hi Virginia,
> The commonly accepted statistical distribution for drug levels is the
> log-normal distribution.
The simple log-normal distribution is probably only commonly
accepted by pharmacokineticists who do not use PK residual error
models. More plausible (and empirically verifiable) models for the
residual error in concentrations typically have a proportional
component (similar to a log-normal distribution) and a
concentration-independent (normal distribution) additive component.
The proportional component is usually used by analytical chemists to
describe their assay as a "%CV" of replicate measurements. The additive
component is a measure of "background noise" and reflects the reality
that a simple constant-CV error model is not appropriate at
concentrations that challenge the assay sensitivity.
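A sketch of this combined error model in Python (the 10% CV and
2 ng/mL additive SD below are hypothetical, not from any particular
assay) shows why the observed %CV blows up near the assay's
sensitivity limit:

import numpy as np

rng = np.random.default_rng(1)
cv, sd_add = 0.10, 2.0  # hypothetical: 10% proportional CV plus 2 ng/mL additive SD

def measure(true_conc, n=10000):
    # combined error model: variance = additive part + proportional part
    sd = np.sqrt(sd_add**2 + (cv * true_conc)**2)
    return true_conc + rng.normal(0.0, sd, size=n)

for c in (5.0, 50.0, 500.0):
    reps = measure(c)
    print(c, 100.0 * reps.std() / reps.mean())  # observed %CV: roughly 41% at 5 ng/mL, about 10% at 500 ng/mL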
> The possible values for the drug concentration in blood/serum/plasma
> are limited on the lower-value side by the LOQ of the analytical
> method, while the high-value end is open. Therefore, the actual
> distribution is not symmetrical.
> Distributions of this kind become approximately normal after
> log-transformation (the log compresses the scale at the high-value
> end).
The additive component helps to protect somewhat against the
difficulties caused by analytical chemists arbitrarily truncating their
measurements by the use of LOQ (discussed many times on this list
before).
> For this reason, extensive pharmacokinetic parameters (AUC, Cmax)
> are analyzed statistically under the normal-distribution assumption
> after log-transformation.
The distributions of AUC and Cmax have a different justification. There
are two sources of random variability in these values. The first, and
typically minor, component is due to measurement error (describable
by the residual error models discussed above). The second, and
quantitatively much larger, component is due to between-subject and
between-occasion variability. These random effects have typically been
modelled under the assumption of a log-normal distribution because
1) all AUC and Cmax values must be non-negative, and a normal
distribution does not enforce this constraint, and 2) the distribution
of AUC and Cmax estimates in reasonable sample sizes is often
right-skewed, which is compatible with a log-normal distribution.
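A short Python sketch (with a hypothetical typical AUC and 30%
between-subject variability) of how such a log-normal random effect is
usually written:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
tv_auc, omega = 100.0, 0.3  # hypothetical typical AUC and between-subject SD on the log scale
auc = tv_auc * np.exp(rng.normal(0.0, omega, size=20))  # 20 simulated subjects

print(auc.min() > 0)    # the log-normal model enforces non-negativity
print(stats.skew(auc))  # often positive in samples of this size (right-skewed)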
Nick
Nick Holford, Divn Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.-at-.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
The following message was posted to: PharmPK
Nick and others,
Different problem, but somewhat related: have you ever been in the
situation of analysing a phase IIa dose-ranging study, with a parallel
design and geometric progression of doses, and subjects sampled at
trough at steady state?
High doses and/or concentrations (it is not clear whether this was
decided a priori) were assayed after multiple dilutions, while the
low-dose groups used the "normal" calibration curve. This was an
LC-MS/MS assay validated from 10 to 10,000 ng/mL, extended up to
100,000 ng/mL with dilution factors of 2, 5, 10 or 20.
We have great difficulties with multiple combined additive +
multiplicative error models, one for each range of concentrations
determined by dose range...
Thanks for any new ideas!
Eliane
Eliane Fuseau, PhD, CEO
EMF Consulting
BP 2
13545 Aix en Provence cedex 4
tel +33 442 908 102
fax +33 442 908 101
mobile +33 622 040 516
eliane.aaa.emf-consulting.com
The following message was posted to: PharmPK
Eliane,
Eliane Fuseau wrote:
> Different problem, but somewhat related: have you ever been in the
> situation of analysing a phase IIa dose-ranging study, with a parallel
> design and geometric progression of doses, and subjects sampled at
> trough at steady state?
> High doses and/or concentrations (it is not clear whether this was
> decided a priori) were assayed after multiple dilutions, while the
> low-dose groups used the "normal" calibration curve. This was an
> LC-MS/MS assay validated from 10 to 10,000 ng/mL, extended up to
> 100,000 ng/mL with dilution factors of 2, 5, 10 or 20.
> We have great difficulties with multiple combined additive +
> multiplicative error models, one for each range of concentrations
> determined by dose range...
Hard to answer your question without you being more explicit about the
"great difficulties". I assume you are attempting a NONMEM analysis of
the PK data with different residual error models for each dose range.
One approach would be to estimate the parameters of a variance model
for the assay error from replicate known concs measured using each
calibration curve. Then use these model parameters as fixed SIGMAs for
each calibration curve range and use an extra one or two SIGMAs to
estimate the model misspecification component of the residual error.
You could then focus on the PK part of the model (the signal) instead
of the residual error (the noise).
e.g.
Assume replicate measurements of standard concs for calibration curve i
at standard concentration j follow this variance model (consistent with
the additive + proportional code below):
var(i,j) = int(i) + slope(i)*stdconc(j)**2
then in your NM-TRAN control stream you might do this:
$SIGMA int(1) FIX ; ERR1 Additive variance for calib curve 1
$SIGMA slope(1) FIX ; ERR2 Proportional variance for calib curve 1
$SIGMA int(2) FIX ; ERR3 Additive variance for calib curve 2
$SIGMA slope(2) FIX ; ERR4 Proportional variance for calib curve 2
; ... one FIXed pair for each calib curve
$SIGMA misspec ; ERR5 Model misspecification variance
$ERROR
; CURV is a data item indicating which calibration curve was used for each DV
IF (CURV.EQ.1) ASSERR=ERR(1) + ERR(2)*F ; random effect due to assay error
IF (CURV.EQ.2) ASSERR=ERR(3) + ERR(4)*F
MDLERR=ERR(5) ; random effect due to model misspecification
Y = F + ASSERR + MDLERR
If you want, you could include the replicate concs as DVs and estimate
int(i) and slope(i) jointly with the PK model parameters.
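In case it helps, here is a minimal sketch (in Python, with simulated
replicates and made-up assay parameters) of estimating int(i) and
slope(i) for one calibration curve from replicate standards before
fixing them as SIGMAs:

import numpy as np

rng = np.random.default_rng(3)
# hypothetical standards (ng/mL) for one calibration curve, 6 replicates each
stdconc = np.array([10.0, 100.0, 1000.0, 10000.0])
sd_true = np.sqrt(2.0**2 + (0.08 * stdconc)**2)  # simulated assay: 2 ng/mL additive SD, 8% CV
reps = stdconc[:, None] + rng.normal(0.0, 1.0, size=(4, 6)) * sd_true[:, None]

# fit var(j) = int + slope*stdconc(j)**2 by least squares on the replicate variances
var_j = reps.var(axis=1, ddof=1)
X = np.column_stack([np.ones_like(stdconc), stdconc**2])
intercept, slope = np.linalg.lstsq(X, var_j, rcond=None)[0]
print(intercept, slope)  # candidate values to FIX as the SIGMAs for this curve

With so few replicates the variance estimates are rough and the highest
standard dominates an unweighted fit, so in practice a weighted or
maximum likelihood fit would do better.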
Bonne chance!
Nick
Nick Holford, Divn Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.-at-.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/