Hello,
I would like the group's opinions on weighting data when running PK data through compartmental analysis. I have heard opposing opinions: some believe it is fine to weight data (e.g., 1/Y, 1/Yhat, 1/Y^2, or 1/Yhat^2), and that you simply choose the weight that gives the best fit for the majority of subjects and run all data with that weight. However, I have also heard that any manipulation of data is not preferred, and so weighting should be avoided if possible. Can people please share their opinions, and if there are any good publications that deal with this topic, please share those references as well.
-- Kristin Grimsrud
[You are right, arbitrary weighting does manipulate the 'data'. However, what investigators are trying to do with 1/Y, 1/Y^2, etc. is to give the program estimates of the error or variance in each data point and weight by the reciprocal of that variance. In some cases 1/Y or 1/Y^2 are good estimates; in other cases more involved functions are needed. In the case of extended least squares, the parameters of a weighting formula are estimated along with the PK parameters. More info at http://www.boomer.org/c/p4/c10/c13/c1301.html
- db]
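[A minimal sketch of this in Python, assuming a one-compartment IV bolus model and SciPy; the data, dose, and starting values are made up for illustration. curve_fit weights each point by 1/sigma^2, so sigma = sqrt(y) corresponds to 1/Y weighting and sigma = y to 1/Y^2:]

```python
# Weighted nonlinear regression for a hypothetical 1-compartment IV bolus
# model, C(t) = (dose/V) * exp(-(CL/V) * t). Weights enter through
# curve_fit's `sigma` argument: weight_i = 1/sigma_i^2.
import numpy as np
from scipy.optimize import curve_fit

def one_cpt_iv(t, V, CL, dose=100.0):
    return (dose / V) * np.exp(-(CL / V) * t)

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24])              # time (h)
y = np.array([9.1, 8.4, 7.2, 5.3, 2.9, 0.85, 0.27, 0.02])  # conc (mg/L)

for label, sigma in [("w = 1 (unweighted)", None),
                     ("w = 1/Y", np.sqrt(y)),
                     ("w = 1/Y^2", y)]:
    popt, _ = curve_fit(one_cpt_iv, t, y, p0=[10.0, 1.0], sigma=sigma)
    print(f"{label:20s} V = {popt[0]:5.2f} L, CL = {popt[1]:5.2f} L/h")
```

[Heavier weighting of the low concentrations (1/Y^2) pulls the fit toward the terminal phase; comparing the three parameter sets shows how much the choice of weights can matter for a given data set.]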
Hi Kristin
You may find the following paper useful.
Peck CC, Beal SL, Sheiner LB and Nichols AI (1984) Extended least squares nonlinear regression: a possible solution to the "choice of weights" problem in analysis of individual pharmacokinetic data. J Pharmacokinet Biopharm 12:545-558.
Regards
Masoud
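[A sketch of the extended least squares idea from the Peck et al. paper, in Python: the variance-model parameters are estimated jointly with the PK parameters, so no weighting scheme has to be chosen in advance. The monoexponential model and the combined additive/proportional variance model here are illustrative assumptions:]

```python
# Extended least squares (ELS): minimize
#   sum_i [ (y_i - f_i)^2 / g_i + ln(g_i) ],   g_i = (a + b*f_i)^2,
# over the PK parameters (C0, k) AND the variance parameters (a, b).
import numpy as np
from scipy.optimize import minimize

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24])
y = np.array([9.1, 8.4, 7.2, 5.3, 2.9, 0.85, 0.27, 0.02])

def els_objective(theta):
    c0, k, a, b = theta
    f = c0 * np.exp(-k * t)            # model prediction
    g = (a + b * f) ** 2 + 1e-12       # modeled variance (guarded from 0)
    return np.sum((y - f) ** 2 / g + np.log(g))

res = minimize(els_objective, x0=[10.0, 0.3, 0.05, 0.1],
               method="Nelder-Mead")
c0, k, a, b = res.x
print(f"C0 = {c0:.2f} mg/L, k = {k:.3f} 1/h, "
      f"additive SD = {a:.3f}, proportional SD = {b:.3f}")
```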
Kristin,
May I suggest you read one of my publications:
Boxenbaum, H.G., S. Riegelman and R. Elashoff. Statistical estimations in pharmacokinetics. J. Pharmacokin. Biopharm. 2: 123-148 (1974). It's still relevant today (all concepts), despite its age. When you "do not weight," you are actually weighting with a weighting factor of unity, so you cannot avoid the issue of weighting. And, in most cases, a weighting factor of 1 will not work. I've done a lot of curve fitting, and here are my viewpoints:
(1) Try to use the same model for each set of data. However, if one set of data is clearly biexponential, go biexponential in your curve fitting. If another data set is clearly triexponential, go triexponential. Try to be as consistent as possible in using one or possibly two models, but be flexible. All you need do is be reasonable.
(2) There are really no rules in curve fitting, just some guidelines and some art. The weighting factor depends on the nature of the data. I usually start with 1/y^2, whereas a colleague uses a variety of weighting factors, all run simultaneously.
(3) Here's the bottom line: if you do not have good randomness of data points about the fitted curve, you have probably done something wrong (a quick diagnostic sketch follows this list). If one of 12 subjects is problematical, you are OK. However, if most of your curve fits show poor randomness of scatter, you have a serious problem. Re-think your approach.
(4) Poor randomness of scatter is almost always due to: (a) a wrong model ...
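[A quick way to check point (3), using a Wald-Wolfowitz runs test on the signs of the residuals; the test itself is an assumed diagnostic, not something from the original post:]

```python
# Runs test on residual signs: systematic misfit (e.g. fitting a
# biexponential data set with a monoexponential model) produces long
# runs of same-sign residuals and a z-statistic far from zero.
import numpy as np

def runs_test_z(residuals):
    """Approximate z-statistic; assumes both signs occur at least once."""
    signs = np.sign(residuals)
    signs = signs[signs != 0]
    n_pos, n_neg = np.sum(signs > 0), np.sum(signs < 0)
    n = n_pos + n_neg
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))
    mean = 2.0 * n_pos * n_neg / n + 1
    var = (mean - 1) * (mean - 2) / (n - 1)
    return (runs - mean) / np.sqrt(var)

# |z| well beyond 2 flags non-random scatter about the fitted curve.
```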
Regarding point (7), I should have stated that the linear trapezoidal rule is used until you hit Cmax, and after that, the log trapezoidal rule kicks in. Harold Boxenbaum
arishel.at.comcast.net
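[A sketch of the linear-up / log-down rule Harold describes, in Python; the sample data are made up:]

```python
# Linear trapezoids up to Cmax, log trapezoids on the declining limb
# (falling back to linear when a log trapezoid is undefined).
import numpy as np

def auc_lin_up_log_down(t, c):
    t, c = np.asarray(t, float), np.asarray(c, float)
    i_max = int(np.argmax(c))
    auc = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        c1, c2 = c[i], c[i + 1]
        if i < i_max or c1 <= 0 or c2 <= 0 or c1 == c2:
            auc += dt * (c1 + c2) / 2.0                 # linear rule
        else:
            auc += dt * (c1 - c2) / np.log(c1 / c2)     # log rule
    return auc

print(auc_lin_up_log_down([0, 1, 2, 4, 8], [0.0, 10.0, 8.0, 4.0, 1.0]))
```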
Dear All:
If you are weighting PK data, it is useful to start first with the
empirically determined assay error, and to fit the relationship between the
assay SD and the measured concentration over its working range. Having done
this, you will have a good estimate of the SD with which any subsequent
single determination is made. Square the SD to get the assay variance, and
use the reciprocal of the assay variance as a well known and widely used
measure of assay credibility. Much better than CV%. You do not need to
censor low data points.
Then, having made your assay error polynomial and stored it in your
software such as the MM-USCPACK software, you can determine the remaining
environmental error either as a multiplicative or as an additive term. Then
you know explicitly how much error is due to your assay, and how much to the
environment. Useful to know.
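[A minimal sketch of this recipe in Python; the replicate data and the quadratic form are illustrative, not any real assay's error polynomial:]

```python
# Fit SD = c0 + c1*C + c2*C^2 to replicate assay data over the working
# range, then weight any later single measurement by 1/SD(C)^2.
import numpy as np

conc = np.array([0.0, 0.5, 2.0, 5.0, 10.0, 20.0])      # nominal conc (mg/L)
sd   = np.array([0.02, 0.03, 0.06, 0.12, 0.22, 0.41])  # observed assay SD

c2, c1, c0 = np.polyfit(conc, sd, 2)     # assay error polynomial

def weight(c_measured):
    sd_hat = c0 + c1 * c_measured + c2 * c_measured ** 2
    return 1.0 / sd_hat ** 2             # reciprocal of the assay variance

print(weight(np.array([0.1, 1.0, 10.0])))  # low points keep finite weight
```

[Because SD(0) is the SD of the blank, even very low concentrations get a finite, well-defined weight, which is why no censoring of low data points is needed.]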
This is how we have weighted all our data in population modeling for
years, and in clinical TDM as well. You might look at:
Jelliffe RW: Explicit Determination of Laboratory Assay Error Patterns - A
Useful Aid in Therapeutic Drug Monitoring (TDM). Check Sample Series: Drug
Monitoring and Toxicology, American Society of Clinical Pathologists
Continuing Education Program, Chicago, IL, 10(4): 1-6, 1990.
Jelliffe RW and Tahani B: A Library of Serum Drug Assay Error Patterns for
Bayesian Fitting of Patient Pharmacokinetic Models, and Some Suggestions for
Improved Therapeutic Drug Monitoring. April 1992.
Jelliffe RW, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis P,
Barbaut X, and Tahani B: Individualizing Drug Dosage Regimens: Roles of
Population Pharmacokinetic and Dynamic Models, Bayesian Fitting, and
Adaptive Control. Therapeutic Drug Monitoring, 15: 380-393, 1993.
Very best regards,
Roger Jelliffe
If the empirically determined assay error is itself weighted, will this lead to an accumulation of error when the individual PK profile is weighted as well?
-- Ed
Dear Ed:
You only weight each data point in the overall PK profile once. You store the assay error polynomial in the software, and each data point gets weighted by the reciprocal of its variance. Nothing accumulates. This is MUCH BETTER than CV%, and gives much better model parameter estimates.
You also don't have to censor low data points. You can track them all the
way down to the blank. Because of this, you can render much better service
to all who order lab assays. Take an HIV PCR. Don't you really want to drive
the result down to zero and document it? We do not do patients any service
by settling for <50 copies, for example. This is bad practice, and is
easily avoided in this way. There are also many other assay results which we
want to drive to zero - Philadelphia chromosome, for example.
Go look at p. 423 of DeGroot, Probability and Statistics, 2nd ed,
Addison-Wesley, 1986, or any other book on statistics. Look for Fisher
information. It is used worldwide, and has been for years. The lab guys are
all brainwashed by CV%, and never think about things any more, just
following so-called guidelines. It is so sad! Think about what is behind the
guidelines - the real reasons for them. CV% is really outmoded and should go
in the trash bin.
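[For reference, the Fisher information of a single measurement y ~ N(mu, sigma^2) with respect to its mean is exactly the reciprocal of the variance, which is the weight described above:]

```latex
\ell(\mu) = -\tfrac{1}{2}\ln(2\pi\sigma^2) - \frac{(y-\mu)^2}{2\sigma^2},
\qquad
I(\mu) = -\mathbb{E}\!\left[\frac{\partial^2 \ell}{\partial \mu^2}\right]
       = \frac{1}{\sigma^2}.
```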
All the very best, always,
Roger
Roger: What I am saying is that the analytical data might already be weighted ...