The debate on weighting seems to appear regularly in the
pharmacokinetic literature (nowadays read 'web' for 'literature'). The
earliest paper in a pharmacokinetic context was that of Boxenbaum,
Riegelman and Elashoff (JPB 2: 123-148 (1974)). In a biochemical
context there are numerous papers including Atkins (Biochem.J.
138:125-127 (1974)) and Finney (Appl.Stat. 26:312-320 (1977)). There
is also a long history in the Chemometrics literature (see Garden et
al Anal.Chem 52: 2310-2315 (1980)). The contribution of Peck et al
(JPB 12: 545-558 (1984)) is also illuminating, but see also
van Houwelingen (Biometrics 44: 1073-1081 (1988)).
The theory is quite clear (Draper and Smith and numerous other
books). In a least squares context, weight by the inverse of the
expected variance of the datum (not by some arbitrary function of
the observed datum). In many applications this is straightforward, as
replicate experiments can be performed. Unfortunately, in
pharmacokinetics this cannot be done. Note that replicating the
assay or repeating the whole experiment does not replicate the
sample. The closest you might come to it is to take simultaneous
samples from different veins, although it is not clear that this is
a true replicate. So what should you do?
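Before coming to that, a minimal sketch in Python of what 'weight by the
inverse of the expected variance' means in practice. Everything in it is
assumed for illustration: the one-compartment bolus model, the made-up data
and the 10% CV variance function are mine, not anything from the references
above.

import numpy as np
from scipy.optimize import curve_fit

def conc(t, V, k):
    # hypothetical one-compartment IV bolus model, dose fixed at 100 units
    return (100.0 / V) * np.exp(-k * t)

# made-up concentration-time data, for illustration only
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
y = np.array([9.1, 8.3, 7.0, 5.1, 2.6, 1.4, 0.70, 0.21])

# expected SD of each datum, here assumed to be a 10% CV around the model
# prediction at rough initial estimates (in practice this would be refined
# iteratively); the least squares weight is 1 / variance, which curve_fit
# applies when given sigma with absolute_sigma=False
sd = 0.10 * conc(t, 10.0, 0.3)

popt, pcov = curve_fit(conc, t, y, p0=[10.0, 0.3], sigma=sd, absolute_sigma=False)
print("V, k        =", popt)
print("approx. SEs =", np.sqrt(np.diag(pcov)))

The awkward part, of course, is where that expected variance comes from when
the sample cannot be replicated, which is the heart of the problem.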
Roger Jelliffe advocates using a function based on the determined
assay error. As he points out it is simple to perform and certainly
systematizes the approach. However assay is not the only source of
variability, which Roger acknowledges. Stochastic variability may be
part of the process noise, but it cannot be estimated separately from
the assay error except perhaps if the assay of a sample could be
repeated 10,000 times to reduce the assay error by a factor of 100.
Simply to multiply the 'assay polynomial' by a scaling factor seems
to defeat the purpose of fitting a polynomial in the first place as
there is no theory which suggests the 'process noise' should have the
same functional form as the assay error. The idea of fixing the
residual variability to that of the assay was originally used by
Mallet in the NPML program. In this case the residual error had
to be fully specified (including the proportionality constant). My
understanding is that in the latest version of the program it is now
possible to estimate an 'error model'.
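To put the assay-versus-process-noise distinction in concrete terms, a short
sketch in the same vein as above. The polynomial coefficients and the form of
the process-noise term are invented for illustration; this is not Roger's or
Mallet's actual implementation.

def assay_sd(c, a0=0.1, a1=0.05, a2=0.0):
    # assay SD as a polynomial in the (predicted) concentration; the
    # coefficients are invented, in practice they come from replicate assays
    return a0 + a1 * c + a2 * c**2

def total_variance(c, cv_process=0.15):
    # assay variance plus a separate proportional 'process noise' term; the
    # two components need not share the same functional form
    return assay_sd(c)**2 + (cv_process * c)**2

# weight for a datum whose predicted concentration is 5 (arbitrary units)
weight = 1.0 / total_variance(5.0)
print(weight)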
So what should be done? Well let me say that I do not know the
perfect solution. For pragmatic reasons people tend to opt for
pragmatic solutions such as constant variance or constant cv (note
that choosing a constant weight of 1 or 0.1 etc. makes no difference
to the precision estimates in least squares as the proportionality
term (sigma^2) is estimated from the residual sum of squares. The
weights only have to be proportional to the reciprocal of the
variance). Roger is right to criticize these schemes as they are only
abstractions of reality. More complicated variance models can be
dreamt up, for example a combination of constant variance and
constant cv but the parameters of these schemes are difficult to
estimate (see Raab Appl.Stat. 30: 32-40 (1981)).
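The parenthetical point about a weight of 1 versus 0.1 is easy to check
numerically; a sketch, reusing the same hypothetical model and data as above.

import numpy as np
from scipy.optimize import curve_fit

def conc(t, V, k):
    return (100.0 / V) * np.exp(-k * t)

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
y = np.array([9.1, 8.3, 7.0, 5.1, 2.6, 1.4, 0.70, 0.21])

for w in (1.0, 0.1):
    # constant-variance weighting scaled by an arbitrary constant; with
    # absolute_sigma=False the sigma values act only as relative weights and
    # sigma^2 is estimated from the residual sum of squares
    sd = np.full_like(t, 1.0 / np.sqrt(w))
    p, cov = curve_fit(conc, t, y, p0=[10.0, 0.3], sigma=sd, absolute_sigma=False)
    print(w, p, np.sqrt(np.diag(cov)))   # estimates and SEs are identical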
My own approach is a combination of pragmatism and experience.
Eventually, having analysed a number of data sets with various
'reasonable' weighting schemes, look at the residual plots and choose
the 'best' common model. Mixed effects modelling can also help here.
Then use that weighting scheme for all the data sets. This scheme may
not appear optimal for every data set, but the decision has an
empirical Bayesian feel about it, in that you are using information
from all of the data sets to come to this decision, and I would be
happy with that.
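For what it is worth, a rough sketch of how that comparison could be
organised (my own illustration of the idea, not a prescription; the variance
functions are arbitrary placeholders).

import numpy as np

# candidate weighting schemes expressed as variance functions of the
# prediction; the 10% CV and the unit variance are placeholders
schemes = {
    "constant variance": lambda pred: np.ones_like(pred),
    "constant cv":       lambda pred: (0.10 * pred) ** 2,
}

def weighted_residuals(obs, pred, variance):
    # if a scheme is roughly right these should scatter evenly, with no
    # trend against the predicted concentration, across all the data sets
    return (obs - pred) / np.sqrt(variance)

# for each fitted data set, plot weighted_residuals(obs, pred, scheme(pred))
# against pred for every candidate scheme, then keep the one scheme that
# looks acceptable across all the data sets, not the best for any single one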
As I said, I would not claim this to be perfect, and I expect to hear
of alternatives and objections.
Leon Aarons
________________
Leon Aarons
School of Pharmacy and Pharmaceutical Sciences
University of Manchester
Manchester, M13 9PL, U.K.
tel +44-161-275-2357
fax +44-161-275-2396
email l.aarons.-at-.man.ac.uk