- On 2 May 2000 at 21:39:20, "Muhammad D. Hussain" (Delwar.-a-.uwyo.edu) sent the message


Dear members,

What criteria are used to choose the weighting of data (none, x, 1/x, log x, 1/y, etc.) in a standard curve for an analytical method? Accepted that "weighting is not done to improve the fit of the data".

One of my colleagues is trying to validate a plasma HPLC method for a compound using an internal standard. The standard curve is linear over the range 10-1000 ng/ml (deviation <15%) when a weighting of 1/x is used for the peak response. If no weighting is used, the lower two concentrations show more than 20% deviation from the expected concentration.

His question is: is this use of 1/x valid?

Thanks for your cooperation.

Delwar

M. Delwar Hussain, Ph.D.

Associate Professor

School of Pharmacy

University of Wyoming

Laramie, WY 82071-3375

Tel: 307 766 6129

Fax: 307 766 2953

- On 3 May 2000 at 22:45:23, David_Bourne (david.-a-.boomer.org) sent the message


[A few replies - db]

From: olaf.kuhlmann.-a-.medizin.uni-halle.de


To: PharmPK.-a-.boomer.org

Date: Wed, 3 May 2000 09:19:05 +0000

Subject: Re: PharmPK

Priority: normal

No.

The linear range of a chromatographic detector represents the range

of concentrations or mass flows of a substance in the mobile phase at

the detector over which the sensitivity of the detector is constant

within a specified variation, usually +/- 5%.

The best way to present detector linear range is the Linearity Plot

plotting detector sensitivity against amount injected, concentration

or mass flow-rate. The upper limit of linearity can be graphically

established as the amount, concentration, or mass-flow-rate at which

the deviation exceeds the specified value (+/- x% window around the

plot). The lower limit of linearity is always the minimum detectable

amount determined separately for the same compound.

See: ChromBook2; 2nd edition from MERCK, page 381-382.
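The graphical procedure described above can be sketched in code. This is a minimal illustration, not the ChromBook's method: the detector data, the choice of reference sensitivity (average of the lowest standards), and the helper name `upper_limit_of_linearity` are all my assumptions.

```python
def upper_limit_of_linearity(amounts, responses, window=0.05, n_ref=3):
    """Estimate the upper limit of detector linearity from a Linearity Plot:
    sensitivity (response / amount) is compared against a reference level
    taken from the lowest standards; the limit is the last amount whose
    sensitivity stays inside the +/- window around that reference."""
    sens = [r / a for a, r in zip(amounts, responses)]
    ref = sum(sens[:n_ref]) / n_ref  # reference sensitivity from the low end
    upper = amounts[0]
    for a, s in zip(amounts, sens):
        if abs(s - ref) / ref > window:
            break  # first point outside the +/- 5% window
        upper = a
    return upper

# Hypothetical detector data that saturates at the highest amounts:
amounts = [1, 2, 5, 10, 20, 50, 100, 200, 500]
responses = [10.1, 20.0, 49.8, 100.2, 199.5, 497.0, 980.0, 1860.0, 4100.0]
print(upper_limit_of_linearity(amounts, responses))  # 100
```

With these invented numbers the sensitivity at 200 units falls more than 5% below the low-end reference, so the upper limit of linearity is reported as 100.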

Hope this helps.

Kuhlmann

Dr.rer.nat. Olaf Kuhlmann (Dipl.-Biol.)

Martin-Luther-Universitaet Halle-Wittenberg

Medizinische Fakultaet

Institut fuer Pharmakologie und Toxikologie

Sektion Pharmakokinetik

Magdeburger Str. 4

06097 Halle/Germany

Tel.: 0345-5574091

FAX : 0345-5571835

E-mail: olaf.kuhlmann.-a-.medizin.uni-halle.de

Webpage: http://www.medizin.uni-halle.de/pharmakin/Kuhlmann.htm

---

From: "Leon Aarons"

To: PharmPK.at.boomer.org

Date: Wed, 3 May 2000 09:43:55 GMT

Subject: Re: PharmPK Weighting

Priority: normal

Delwar

There is a lot in the analytical literature on this topic and you

should start there rather than the pk/statistical literature. An

excellent early reference is Garden et al. 'Nonconstant variance

regression techniques for calibration-curve-based analysis',

Anal.Chem. 52: 2310-2315 (1980)

Leon Aarons

School of Pharmacy and Pharmaceutical Sciences

University of Manchester

Manchester, M13 9PL, U.K.

tel +44-161-275-2357

fax +44-161-275-2396

email l.aarons.aaa.man.ac.uk

---

Date: Wed, 03 May 2000 08:32:31 -0400

From: "Ed O'Connor"

Reply-To: efoconnor.aaa.snet.net

Organization: PM PHARMA

X-Accept-Language: en

To: PharmPK.at.boomer.org

Subject: Re: PharmPK Weighting

Weighting usually reduces the overall fit; that is, the Pearson r is reduced with increased weighting. The difference from expected (DFE), or error, is reduced for the points at the lower end of the curve. Error may increase at the upper end.

Generally, for most bioanalytical assays, an error of 20% is the tolerance at the lower end and 15% for the middle and upper points. These are general limits and may be broadened.

Again, as you increase weighting you reduce the bias from the upper points, reduce the r, and decrease the error at the lower end. You must balance these effects against the acceptance criteria of your assay: the limits on error and on the r value.

---

From: "Ossig, Dr. Joachim"

To: "'PharmPK.aaa.boomer.org'"

Subject: AW: PharmPK Weighting

Date: Wed, 3 May 2000 15:00:54 +0200

Dear Delwar (and others)

What do you think about this justification for weighting:

Linear regression is calculated by minimizing the square of the absolute

deviations from the fitting line.

Let us think about a theoretical experiment with the following result: All

calibration standards perfectly fit, except the lowest, which deviates 1% in

one direction and the highest, which deviates 1% in the opposite direction.

The unweighted linear regression line would lead to deviations of the

recalculated values from about 8% at 10 ng/ml to about 1% at the upper end

of the calibration range.

weight factor: 1/y^0

 meas.    spiked    calc.    rel. deviation (%)
  10.1      10.0      9.2        -8.25
  15.6      15.6     14.8        -5.60
  31.3      31.3     30.5        -2.35
  62.5      62.5     62.1        -0.72
 125.0     125.0    125.1         0.10
 250.0     250.0    251.3         0.50
 500.0     500.0    503.5         0.71
 990.0    1000.0    998.0        -0.20

Parameters of curve: A (calc.) = (meas. - 1.00782)/0.99099

Using 1/y weighting, the relative deviations in this example would be

between -0.8% and +0.6%:

weight factor: 1/y^1

 meas.    spiked    calc.    rel. deviation (%)
  10.1      10.0     10.0        -0.47
  15.6      15.6     15.5        -0.73
  31.3      31.3     31.2        -0.07
  62.5      62.5     62.7         0.25
 125.0     125.0    125.5         0.42
 250.0     250.0    251.3         0.50
 500.0     500.0    502.7         0.54
 990.0    1000.0    995.6        -0.44

Parameters of curve: A (calc.) = (meas. - 0.20466)/0.99419

Therefore, I would agree with using 1/y weighting in bioanalytical work, since otherwise the data of the upper calibration range are, in my humble opinion, overweighted.
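Dr. Ossig's thought experiment is easy to reproduce with a closed-form weighted least-squares fit. This is a plain-Python sketch (the helper name `wls_line` is mine); it fits his eight standards unweighted and with 1/y weighting, then back-calculates the relative deviations shown in his tables.

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x, minimizing sum(w_i * r_i^2)."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw  # weighted mean of x
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw  # weighted mean of y
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b  # intercept a, slope b

# All standards perfect except +1% at the bottom and -1% at the top:
spiked = [10.0, 15.6, 31.3, 62.5, 125.0, 250.0, 500.0, 1000.0]
meas = [10.1, 15.6, 31.3, 62.5, 125.0, 250.0, 500.0, 990.0]

for label, w in [("1/y^0", [1.0] * 8), ("1/y^1", [1.0 / m for m in meas])]:
    a, b = wls_line(spiked, meas, w)
    calc = [(m - a) / b for m in meas]  # back-calculated concentrations
    dev = [100.0 * (c - s) / s for c, s in zip(calc, spiked)]
    print(label, [round(d, 2) for d in dev])
```

The unweighted fit reproduces the -8.25% deviation at 10 ng/ml, while the 1/y fit brings it to about -0.5%, matching the two tables above.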

Dr. Joachim Ossig Tel.: +49-(0)241-569-2409

Grünenthal GmbH Fax.: +49-(0)241-569-2501

Department of Pharmacokinetics (FE-PK)

Zieglerstr. 6

Mailto: Joachim.Ossig.-a-.grunenthal.de

52078 Aachen, Germany

---

Date: Wed, 03 May 2000 08:14:43 -0700

From: "David Nix, Pharm D."

Organization: College of Pharmacy

To: PharmPK.-a-.boomer.org

Subject: Re: PharmPK Weighting

For analytical purposes, I have routinely used 1/x^2 weighting. Weighting based on the theoretical concentration is fairly well accepted in the analytical field, although one could argue that weighting based on the predicted concentration (e.g. 1/y^2) is more statistically valid.

Given the typical variability patterns for most analytical techniques,

some weighting is necessary, especially if the standard curve range is

large (i.e. more than 1 log 10 range). 1/x^2 or 1/y^2 most closely

approaches having residuals that are normally distributed with constant

CV% over the full concentration range.

The only problem with using 1/x^2 or 1/y^2 appears when the standard

curve is carried down too low. If the lowest standard is

associated with very poor precision, then weighting 1/x^2 or 1/y^2

places too much weight on this low concentration. This is not likely to

be a major problem as long as the CV% for the lowest standard

concentration is less than 15%.

---

From: "Melethil, Srikumaran K."

To: "'PharmPK.-a-.boomer.org'"

Subject: RE: PharmPK Weighting

Date: Wed, 3 May 2000 13:18:15 -0500

Dear Delwar,

Often, I have seen weighting done to improve fit, so it is good to hear your motive.

Weighting in principle should be based on the expected error (variance) of the assay at the desired concentration. An old (classic?) paper by Boxenbaum et al. (JPB, 1973 I think) discusses this issue. A statistics text by Draper and Smith also addresses this issue.

Srikumaran Melethil, Ph.D.

Professor of Pharmaceutics and Medicine

University of Missouri-KC

203 B Katz Hall, School of Pharmacy

5005 Rockhill Road

Kansas City, MO 64110

816-235-1794 (voice); 816-235-5190 (fax)

---

Date: Wed, 03 May 2000 14:39:58 -0400

From: "Ed O'Connor"

Reply-To: efoconnor.aaa.snet.net

Organization: PM PHARMA

X-Accept-Language: en

To: PharmPK.-at-.boomer.org

Subject: Re: PharmPK Weighting

Expanding on my earlier response:

1. The LOD should be determined first.

2. Then run the curve in triplicate using unique standards.

3. Check the average LOD and adjust the ELOQ as needed.

4. Now examine the curve. The native error will be biased towards the high end. The larger the intended dynamic range of the curve, the larger the error at the low points. Conversely, the r value gets better and better. Now examine the effect of weighting on the error and r values. As the weight increases, the error at the low end should decrease, but r will also decrease, possibly to an unusable point.

Some assays, particularly where SPE, derivatization or immunoaffinity is involved, may not be linear but may in fact require a quadratic or other equation to develop a usable concentration-response curve, and may still require weighting.

- On 9 May 2000 at 22:57:36, David_Bourne (david.-at-.boomer.org) sent the message


[Two replies - db]

From: "Stephen Duffull"

To:

Subject: RE: PharmPK Re: Weighting

Date: Thu, 4 May 2000 15:14:03 +1000

X-Priority: 3 (Normal)

Importance: Normal

Why would you want to compute the limit of quantitation?

Regards

Steve

=================

Stephen Duffull

School of Pharmacy

University of Queensland

Brisbane, QLD 4072

Australia

Ph +61 7 3365 8808

Fax +61 7 3365 1688

---

X-Sender: jelliffe.-at-.hsc.usc.edu

Date: Thu, 04 May 2000 15:20:15 -0700

To: PharmPK.at.boomer.org

From: Roger Jelliffe

Subject: Re: PharmPK Weighting

Dear Dr. Hussain:

Why not determine the assay standard deviation (SD) at

several points over

its working range? Why not weight each data point by its easily known

credibility (its Fisher Information, the reciprocal of the variance of each

data point), and then distribute the other errors as intraindividual

variability? Then you know what fraction of the overall intraindividual

variability is due to the assay, for example. Many people simply assume

that assay error is a small fraction of the overall error. In our

experience, doing it our way, assay error may range from something less

than 1/4 to more than 1/2 the total error, and then you KNOW this. We

really suggest it is very useful, and more optimal, to start with the known

assay error, to use a parametric method such as the iterative Bayesian 2

stage (IT2B) population modeling program in the USC*PACK collection, find

gamma, the rest of the error, and then go to a nonparametric program such

as NPEM to get the full parameter joint density. Below is an earlier

version of this discussion as well. The basic point is - why bother to make

assumptions about the assay error? Why not simply determine it and be done

with it? Then find the remaining noise in the intraindividual variability.

It really boosts confidence in a study if you find a low gamma, for

instance, showing that the remaining error beyond the assay is relatively

small. The converse is also useful to know.

Best regards,

Roger Jelliffe

Dear Colleagues:

I don't understand all this discussion of how to weight the data,

whether

it is better to weight by 1/y^2 or by doing the log transformation, for

example. Why not skip all these assumptions and simply calibrate the assay

over its working range, and then fit the relationship between the

concentration and the SD with a polynomial so one can have a good estimate

of the SD with which each single level is measured, so one can then fit

according to the Fisher information of each concentration, namely the

reciprocal of the variance of each data point? The problem is that the

coefficient of variation is hardly ever constant, and the SD needs to be

known over its entire working range.
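Dr. Jelliffe's recipe can be sketched as follows. This is a minimal illustration, not the USC*PACK implementation: the replicate SD values are hypothetical, a second-degree polynomial is assumed for the SD-vs-concentration relationship, and the helper names (`polyfit2`, `fisher_weight`) are mine.

```python
def polyfit2(x, y):
    """Ordinary least-squares fit of y = a0 + a1*x + a2*x^2,
    via the normal equations and Gaussian elimination (no numpy)."""
    S = [sum(xi ** k for xi in x) for k in range(5)]      # sums of x^0..x^4
    A = [[S[i + j] for j in range(3)] for i in range(3)]  # normal matrix
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(3)]
    for col in range(3):                                  # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                   # back substitution
        a[i] = (b[i] - sum(A[i][j] * a[j] for j in range(i + 1, 3))) / A[i][i]
    return a

# Hypothetical assay calibration: SD of replicate measurements at each level.
conc = [0.0, 1.0, 5.0, 10.0, 50.0, 100.0]
sd = [0.05, 0.08, 0.20, 0.35, 1.58, 3.15]
a0, a1, a2 = polyfit2(conc, sd)  # polynomial describing SD over the range

def fisher_weight(c):
    """Weight (Fisher information) of a level at concentration c: 1/SD(c)^2."""
    s = a0 + a1 * c + a2 * c * c
    return 1.0 / (s * s)
```

Each serum level is then weighted by `fisher_weight` of its concentration when fitting the PK model, so low, precisely measured levels get more credibility than noisy high ones, exactly in proportion to the assay's determined error pattern.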

If one uses the log transformation, for example, a concentration of 10

units has only 1/100 the weight (Fisher info) of a concentration of 1 unit,

and only 1/10,000 the weight of a concentration of 0.1 units. Is this

realistic? I don't think so. I really don't understand all this discussion

about a point that can be easily answered simply by calibrating each assay,

by determining its error pattern over its working range. This point is

discussed more fully in Therapeutic Drug Monitoring 15: 380-393, 1993.
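The arithmetic behind those 1/100 and 1/10,000 figures: fitting ln(y) with constant variance on the log scale is equivalent, to first order, to weighting y by 1/y^2, since d(ln y)/dy = 1/y. A two-line check (the function name is mine):

```python
def log_transform_weight(y):
    # Implied weight (Fisher information) on the original scale when the
    # log-transformed data have constant variance: proportional to 1/y^2.
    return 1.0 / y ** 2

print(log_transform_weight(10) / log_transform_weight(1))    # 0.01  (1/100)
print(log_transform_weight(10) / log_transform_weight(0.1))  # ~1e-4 (1/10,000)
```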

Of course there are other errors than just the assay. There are those

associated with errors in preparing and giving each dose, and in recording

the times when the doses were given, and with recording the times when the

serum samples were drawn. What sense does it make to assume that all of

these are also part of the measurement noise, and then to use the log

transformation or 1/y^2 as the description of them? Most of them are

actually part of the process noise, not the measurement noise. But whatever

is done, why not start by knowing what the assay errors actually are?

Sincerely,

Roger Jelliffe

Roger W. Jelliffe, M.D. Professor of Medicine, USC

USC Laboratory of Applied Pharmacokinetics

2250 Alcazar St, Los Angeles CA 90033, USA

Phone (323)442-1300, fax (323)442-1302, email= jelliffe.at.hsc.usc.edu

Our web site= http://www.usc.edu/hsc/lab_apk

- On 11 May 2000 at 21:00:45, Roger Jelliffe (jelliffe.-a-.usc.edu) sent the message


Dear Steve:

I don't understand your question. When you are doing PK work, and have

information about the times at which doses were given and serum

concentrations (or other responses) were obtained, then you know the drug is

present, and your only concern is how much drug is present. If you

determine the assay error over its working range, down to and including the

blank (which is needed to determine the lower limit of detection for

toxicological work) then you can give correct weight to each measurement,

even down to and including the blank. Am I missing something? Can you

expand on your question?

Best regards,

Roger Jelliffe

Roger W. Jelliffe, M.D. Professor of Medicine, USC

USC Laboratory of Applied Pharmacokinetics

2250 Alcazar St, Los Angeles CA 90033, USA

Phone (323)442-1300, fax (323)442-1302, email= jelliffe.-at-.hsc.usc.edu

Our web site= http://www.usc.edu/hsc/lab_apk


Copyright 1995-2010 David W. A. Bourne (david@boomer.org)