Back to the Top
Dear All,
We are working on a molecule with a calibration curve range of about
1600-fold. When we use 1/x2 weighting with a linear fit for
quantitation of this compound, the two highest standards of the
calibration curve show a percent accuracy of about 86-87%,
whereas the rest of the standards show accuracy of more than 95% of
their nominal value. If instead we use 1/x2 weighting with a quadratic
fit, all the standards as well as the QCs show more than 95% accuracy.
I just wanted to know: for bioequivalence studies, can we use 1/x2
(quadratic) as the weighting factor?
Regards
Arpana
Back to the Top
You probably want to avoid a quadratic model, as there can be two
solutions for each response. What you want is a model that gives
you one solution.
The guidance is to use the simplest model that fits the data.
A quadratic is one step up, but it can give two solutions; how will
you rule one in and one out when dealing with unknowns?
Even with the quadratic you need to use a weight of 1/x^2. That
sounds like you are pushing your system to a place it doesn't want to
go. If the detector is clean, perhaps your detection or ionization is
affected at the high end. The simplest conclusion, and the thing the
data are telling you, is that your ULOQ is not easy to achieve with
your present LLOQ. You may need to restructure your curve by dropping
some of the high points, possibly changing the injection volume and/or
the final sample (extraction) volume.
Have you optimized the system? Do you have other models besides the
quadratic that you might not need to weight?
Back to the Top
The following message was posted to: PharmPK
Arpana,
For what it's worth, regulatory authorities would have no scientifically
valid reason to deny a quadratic regression method, although this
assumes the equipment itself is performing linearly.
The practical issue to be concerned with is the software you are using
to fit the data: not that there will be anything in error, but that you
should have enough of the appropriate lack-of-fit metrics to evaluate
the quadratic regression with weighting.
Also, as you may know, a large range carries more risk with homogeneity
of variance, so a log/log transformation should also be evaluated for
better model predictability and curve fitting.
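As a rough illustration of the log/log evaluation suggested here, the sketch below fits a straight line to log(response) versus log(concentration) and back-calculates accuracy against nominal. All calibration data are invented for illustration; this is only a sketch of the idea, not a validated procedure.

```python
import numpy as np

# Made-up calibration data spanning a wide (~1600-fold) range
conc = np.array([0.5, 1, 5, 25, 100, 400, 800], dtype=float)        # nominal ng/mL
resp = np.array([0.011, 0.021, 0.103, 0.49, 1.92, 7.1, 13.2])       # detector response

# Fit log(response) = slope * log(conc) + intercept
slope, intercept = np.polyfit(np.log(conc), np.log(resp), 1)

# Back-calculate concentrations and express accuracy vs nominal
back = np.exp((np.log(resp) - intercept) / slope)
accuracy = 100.0 * back / conc
print(np.round(accuracy, 1))
```

A slope near 1 on the log/log scale indicates near-proportionality; systematic drift in the back-calculated accuracy across the range would point to residual lack of fit.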
Good luck,
-Shawn
Shawn D. Spencer, Ph.D., R.Ph.
Assistant Professor of Biopharmaceutics
Florida A&M College of Pharmacy
Tallahassee, FL 32307
shawn.spencer.aaa.famu.edu
Back to the Top
The following message was posted to: PharmPK
Yes. Indeed.
The use assumes a single-valued, monotonically increasing function.
Very important.
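The single-valued requirement can be checked directly: a quadratic y = a + b*x + c*x^2 is monotonic over the calibration range whenever the parabola's vertex falls outside that range. A minimal sketch, with made-up coefficients and range:

```python
def is_monotonic_on_range(a, b, c, lloq, uloq):
    """True if y = a + b*x + c*x**2 is strictly monotonic on [lloq, uloq]."""
    if c == 0:                      # degenerate case: a straight line
        return b != 0
    vertex = -b / (2.0 * c)         # turning point of the parabola
    return not (lloq < vertex < uloq)

# Example: slight negative curvature (detector roll-off at the high end);
# all numbers are hypothetical illustration values.
print(is_monotonic_on_range(0.002, 1.0, -2.0e-4, lloq=0.5, uloq=800.0))
```

If the vertex lands inside the range, each response maps to two in-range concentrations and the curve should not be used as fitted.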
Back to the Top
The following message was posted to: PharmPK
Dear Arpana:
I have never been clear why some people use 1/x or 1/x2 or anything
like that. For events that are countable, like radioactive counts, the
variance is equal to the count and the SD to the square root of the
count. However, I do not understand how that relates to the precision
of an assay unless it has been specifically determined to be the case.
First, let me ask just why you wish to assign a weight to any
measurement. It seems to me that we usually want some quantitative
measure of the credibility of each measurement. This is NOT a percent
error like CV%. Such a percent error is never found in any book I know
of as a valid measure of the credibility of a measurement.
Instead, what we find is the reciprocal of the variance with which a
measurement is made (see, for example, Morris DeGroot, Probability and
Statistics, 2nd ed., Addison-Wesley, 1986, p. 423). This is a widely
known and well-accepted quantitative measure of credibility. I think
the problem in the lab has been that most lab data were not used to fit
assay data in a quantitative sense until quite recently. As a result,
there has been a great cultural blind spot within the laboratory
community.
Another result of this blind spot has been the fact that as a
measurement approaches zero, the CV% grows greatly and eventually
becomes infinite. This has led to the false belief that low assay data
must be censored, for example when the CV% reaches some value, say 20%,
above which "the error becomes unacceptably great".
The fact is that the standard deviation usually becomes still
smaller, reaches a plateau, or may even begin to increase slightly as a
measurement approaches zero, but it ALWAYS REMAINS FINITE, even at the
blank, where you have the finite and easily measurable machine noise.
The signal is never "lost in the noise". You always have a measurement
and you always have the easily measurable noise.
Because you always have a valid measure of credibility, even at the
blank, there is never any need to censor low data, and the entire
concept of LLOQ or LLOD is an intellectual illusion and blind spot.
It is easy to do much better. Simply take several samples that cover
the working range of the assay, just as you do now for QC. Assay each
sample, including a blank, in replicate. Not just in duplicate or
triplicate, as for a mean: variability is more difficult to estimate
than a central tendency. Run them as many times in replicate as you
can. Five replicates per sample is OK; more is always better. Get the
mean and SD for each sample. Then fit a polynomial to that
relationship, so that for any measurement that comes through your lab
assay system in the future, you have a good estimate of the SD of each
measurement. Square the SD to get the variance, and then take its
reciprocal to get the Fisher information associated with that data
point. This is a well-known quantitative measure of credibility. It is
exactly analogous to the Fisher information matrix of an array of
numbers whose precision is quantified in this manner.
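A minimal sketch of this replicate-based error model. All replicate values below are made-up illustration numbers, and the polynomial degree is one reasonable choice among several:

```python
import numpy as np

# Hypothetical replicate measurements at a blank plus four levels
replicates = {
    0.0:   [0.4, -0.3, 0.5, -0.2, 0.1],      # blank: machine noise only
    1.0:   [1.1, 0.9, 1.2, 0.8, 1.0],
    10.0:  [10.6, 9.5, 10.2, 9.8, 10.4],
    100.0: [103, 96, 101, 98, 104],
    800.0: [780, 825, 810, 790, 815],
}

means = np.array([np.mean(v) for v in replicates.values()])
sds   = np.array([np.std(v, ddof=1) for v in replicates.values()])

# Fit SD as a 2nd-order polynomial of the mean measurement
coef = np.polyfit(means, sds, 2)

def weight(y):
    """Fisher information (1/variance) for a single measurement y."""
    sd = np.polyval(coef, y)
    return 1.0 / sd**2

print(weight(50.0))
```

The fitted polynomial lets every future single measurement carry its own weight (1/variance), including measurements at or near the blank, where the SD stays finite.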
In contrast to samples for toxicology, we usually know that the
substance we are assaying is actually there. We are not asking that it
be some socially acceptable number of SDs above the blank.
It is really easy to make both the fitters and the toxicologists
happy. Simply report the assay result as, for example, 0.1 +/- 0.5.
Then both types of people have what they need. Don't bother to ask
if the stuff is there or not. If you want to treat HIV, for example,
kill the virus; don't settle for <50 copies. There is a big difference
between 45 +/- 5 and 3 +/- 5. Censoring such low data (which it is
actually most important NOT to do!) walks away from this extremely
significant problem, which is then ignored by us all, because that is
what the lab (most wrongly) says is common practice.
I think we do best when we use the actual precision of the assay at
any measurement. This is the reciprocal of the variance with which the
result was obtained.
Very best regards,
Roger Jelliffe
Back to the Top
Arpana: The guidance is clear on using the simplest model to fit the
data. That being said, you can step up the model to quadratic, or to
four- or five-parameter regressions, with or without weighting. You
should be aware that you may need to provide evidence that you tried
other models with little success.
The LLOQ and ULOQ must meet both accuracy and precision tolerances, not
just precision; this is defined in the guidance documents. Currently
these address precision as %CV and accuracy as %Bias. Bioanalysts are
also examining the use of Total Error, a concept used in clinical
chemistry that relies on both %CV and %Bias.
The suggested tolerances are 15% CV and +/- 15% bias for instrumental
analyses, and 20% CV and +/- 20% bias for ligand-binding assays. Again,
these are guidances, and if you have made honest attempts to meet them
but cannot, you may use other tolerances with proof.
Your range seems a bit excessive, and you may get better tolerances and
a simpler model if you reduce it.
Back to the Top
Just one thing to verify: the solubility of the compound in the upper
calibration samples.
It depends on how you make your dilutions: are the calibration samples
prepared by dilution of the ULOQ, or do you directly spike the ULOQ,
ULOQ-1, and ULOQ-3, for example? If you spike these first high samples
directly, which is a good way to work, maybe your non-linearity is due
to poor solubility in the higher samples (you can't see that with
serial dilutions of the ULOQ).
If there is a solubility problem, and if you need to use a quadratic
calibration curve, it is important, in my opinion, to prepare QC
samples in the upper concentration range (in the non-linear range).
If the calibration and QC sample acceptance criteria are OK, I think
there is no reason to worry about your results.
I don't know what your detector is (API 3200 or API 4000), but you
could try increasing the curtain gas to the maximum to try to minimise
the non-linearity. Sometimes it works.
Hope this helps.
Fabrice Guillet
XENOBLIS
France
Back to the Top
The following message was posted to: PharmPK
The danger in using quadratic equations is the inherent two solutions.
How will you control for this? In QC samples it's quite obvious, but
for unknowns you need to be prepared. Will you run each sample in a
series of dilutions? That may control the two-solution issue, but it is
costly in terms of time and material. If you do not insert a better set
of controls, you may be under-reporting sample values.
Back to the Top
The following message was posted to: PharmPK
Dear all,
From a mathematical point of view, quadratic equations yield two
solutions - OK.
Nevertheless, this is not really relevant for analytical applications
if you keep a sound safety margin from the region where the slope of
the function decreases to zero (e.g., saturation of the detector).
So if you do not push your calibration range to the maximum, quadratic
regression will often give a better fit and will generate results with
higher accuracy, especially for samples in the range of the ULOQ.
Unknown samples >ULOQ will be safely identified in this set-up.
Therefore I do not see any benefit in using strictly linear regression
models.
Maybe check both linear and quadratic regression. If precision is OK in
both models and the quadratic model gives a better fit regarding
accuracy, it is reasonable, to my understanding, to choose the
quadratic model.
Best wishes
Sven
E-Mail: sven.poetzsch.-at-.merck.de
Merck KGaA
Frankfurter Str. 250
D 64293 Darmstadt
Back to the Top
Regarding quadratic equations for standard curves, my experience for
LC-MS methods is that a quadratic curve is appropriate. Although the
possibility of two concentrations exists for one response, in
actuality only one of the two makes sense. For example, one of the
two solutions may be a negative concentration, or it may be impossibly
high. I have never seen both values fall within the range of the
calibration curve. This is easily checked when the curve equation is
generated.
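The root-selection check described above can be sketched as follows. The coefficients and calibration range are made-up illustration values; the point is that only one of the two roots falls inside [LLOQ, ULOQ], and the other can be rejected automatically:

```python
import math

def back_calculate(y, a, b, c, lloq, uloq):
    """Invert y = a + b*x + c*x**2 and return the root inside [lloq, uloq]."""
    disc = b * b - 4.0 * c * (a - y)
    if disc < 0:
        raise ValueError("response outside the fitted curve")
    r1 = (-b + math.sqrt(disc)) / (2.0 * c)
    r2 = (-b - math.sqrt(disc)) / (2.0 * c)
    in_range = [r for r in (r1, r2) if lloq <= r <= uloq]
    if len(in_range) != 1:
        raise ValueError(f"ambiguous or out-of-range roots: {r1:.4g}, {r2:.4g}")
    return in_range[0]

# Hypothetical curve with slight negative curvature: for any in-range
# response, the second root is impossibly high (well above the ULOQ).
a, b, c = 0.0, 1.0, -2.0e-4
print(back_calculate(0.5, a, b, c, lloq=0.5, uloq=800.0))
```

Raising an error on ambiguous roots, rather than silently picking one, is the conservative design choice for unknowns.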
-Tom
Thomas L. Tarnowski, Ph.D.
Bioanalytical Development /Analytical /Development
Elan Pharmaceuticals, Inc.
800 Gateway Boulevard
South San Francisco, CA 94080
thomas.tarnowski.-at-.elan.com
Back to the Top
The following message was posted to: PharmPK
Why is it that so many people limit their thinking to linear
regressions?
It's fine if they work, and they often do, but we should not expect
linearity in all cases.
Did anyone ever tell Mother Nature that she's supposed to be linear?
Walt Woltosz
Chairman & CEO
Simulations Plus, Inc. (NASDAQ: SLP)
42505 10th Street West
Lancaster, CA 93534-7059
U.S.A.
http://www.simulations-plus.com
Phone: (661) 723-7723
FAX: (661) 723-5524
E-mail: walt.at.simulations-plus.com
Back to the Top
The following message was posted to: PharmPK
Good for you, Walt!
Roger Jelliffe
Back to the Top
I like 4- and 5-PL as well as linear and quadratic models; whatever
model is the simplest and best fits the data. But quadratic carries
some baggage. I have been in a situation using LC-MS and a quadratic
curve, running unknowns that were only resolved with dilution. It is
useful, but the control process must be there.
Back to the Top
Dear all,
I want to know whether we can use a quadratic curve in place of a
linear curve for calibration standards, and in what circumstances we
can use a quadratic curve.
Is this acceptable from a regulatory point of view?
Thanks
Jacob.
[What type of assay - db]
Back to the Top
Jacob-
Regarding the use of non-linear curves: for bioanalytical applications
the FDA guidance is to use the simplest equation that fits the data. So
collect the calibration curve data and fit it to linear and quadratic
equations, and also use various weighting schemes (none, 1/conc,
1/conc squared). Perform a residuals analysis and pick the one with
generally the lowest sum of residuals. If quadratic has lower residuals
than linear, then consider using it. However, it is also always a good
idea to understand why the curve might be nonlinear (e.g., approaching
detector saturation at high concentrations).
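A minimal sketch of this model-comparison procedure, using invented calibration data. Note that `np.polyfit`'s `w` argument expects square-root weights, so the 1/x and 1/x^2 variance weights are passed through `np.sqrt`:

```python
import numpy as np

# Made-up calibration data with mild saturation at the high end
conc = np.array([0.5, 1, 5, 25, 100, 400, 800], dtype=float)
resp = np.array([0.010, 0.020, 0.099, 0.49, 1.90, 7.2, 13.6])

results = {}
for degree, label in [(1, "linear"), (2, "quadratic")]:
    for wname, w in [("none", np.ones_like(conc)),
                     ("1/x", 1.0 / conc),
                     ("1/x^2", 1.0 / conc**2)]:
        coef = np.polyfit(conc, resp, degree, w=np.sqrt(w))
        pred = np.polyval(coef, conc)
        # Mean relative (%) residual of response, per model/weight combination
        results[(label, wname)] = float(np.mean(100.0 * np.abs(pred - resp) / resp))

for key, err in sorted(results.items(), key=lambda kv: kv[1]):
    print(key, round(err, 2))
```

On data like these, the unweighted linear fit back-predicts the low standards very poorly, while the weighted quadratic fits tend to sit lowest in the ranking; the same comparison on real data would be done with back-calculated concentrations against acceptance criteria.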
Tom
Thomas L. Tarnowski, Ph.D.
Bioanalytical Development / Analytical Development
Elan Pharmaceuticals, Inc.
800 Gateway Boulevard
South San Francisco, CA 94080
thomas.tarnowski.-at-.elan.com
Back to the Top
Yes, you can use a quadratic curve in preference to a linear curve:
one could prefer a quadratic curve if it afforded a significantly
better fit over the range of concentrations of the calibration
standards. In this situation, based on scientific judgement, one may
have to use more than 3 QC concentration levels to control the quality
of the samples across the concentration range of interest.
Angus McLean
BioPharm Global Inc.
Back to the Top
The following message was posted to: PharmPK
Good for you, Angus!
You can fit a polynomial curve to the relationship between the assay
SD and the measurement, each done in replicate. Take about 5 samples: a
blank, a low, a medium, a high, and a very high one. Measure each one
at least 5 times (more is better; SDs and variances need more samples
for good estimates). Get the mean and SD for each of the samples. Now
you can fit the relationship between the mean measurement and its SD
with a polynomial of up to 3rd order, for example. Now you have a way
to get a good estimate of the SD of any single sample that comes
through your assay system.
The other thing you can do with this is to square the SD to get the
variance, and then take its reciprocal, 1/variance. This provides a
well-known means of weighting data measurements by their credibility.
It is not in the lab culture, but it is very well known everywhere
else. Look in any statistics book and you will find it, usually under
the heading of Fisher information, for Sir Ronald Fisher, who developed
it all.
The other very significant benefit of this is that you do not have
to censor low data. The whole idea of the LLOQ or the LLOD is an
illusion perpetrated upon the lab community by itself, because of the
false idea that CV% is a correct measure of assay error. It is not. The
lab community has its head in the sand about this, and it is time to
take advantage of the opportunity to use the SD and 1/var as the
correct measure of the error.
If you are treating a patient with HIV, you don't want to drive the
viral load just to <50 copies; what is that? You want to drive it to
0.0 and document that fact. You simply cannot do that with CV%, but it
is EASY to do using the SD and 1/var. You can track a measurement all
the way down to and including the blank, and you need to know the SD at
the blank (the machine noise).
Also, you might ask Nick Holford what he thinks about this.
You might also look at:
Jelliffe RW, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis P,
Barbaut X, and Tahani B: Individualizing Drug Dosage Regimens: Roles of
Population Pharmacokinetic and Dynamic Models, Bayesian Fitting, and
Adaptive Control. Therapeutic Drug Monitoring, 15: 380-393, 1993.
Very best regards,
Roger Jelliffe
Back to the Top
The following message was posted to: PharmPK
Hi,
> Roger W. Jelliffe wrote:
>
> Also, you might ask Nick Holford what he thinks about this.
I agree with Roger that the chemical analysts have got their heads
stuck in the sand by refusing to understand that measurement error can
be quantified and used in pharmacokinetic and pharmacodynamic data
analysis.
Data analysts can deal with the measurements no matter how imprecise
they seem to be. But this requires that the chemical analysts do not
deliberately conceal information by refusing to reveal the measured
values less than the lower limit of quantitation (LLOQ). The LLOQ is a
parameter used for assay validation. It has no useful meaning for PKPD
data analysis.
Nick
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.aaa.auckland.ac.nz
Back to the Top
The following message was posted to: PharmPK
So you would then be fine with us reporting out data from assays in
which calibrators and QCs fail accuracy as well as precision? Each of
those is a determinant of the LLOQ; we do not consider precision alone.
Back to the Top
The following message was posted to: PharmPK
efoconnor wrote:
>
> So you would then be fine with us reporting out data from assays in
> which calibrators and QCs fail accuracy as well as precision? Each of
> those is a determinant of the LLOQ; we do not consider precision
> alone.
I presume this is in response to my comment:
"Data analysts can deal with the measurements no matter how imprecise
they seem to be. But this requires that the chemical analysts do not
deliberately conceal information by refusing to reveal the measured
values less than the lower limit of quantitation (LLOQ). The LLOQ is a
parameter used for assay validation. It has no useful meaning for PKPD
data analysis."
I said data analysts can deal with imprecise measurements. That does not
mean they can deal with biased results. The chemical analyst still needs
to try to avoid bias e.g. caused by carryover from a previous sample if
sufficient time is not left for washout.
The LLOQ is a parameter used for assay validation. It is typically
performed before actual measurements are made. It is not a property of
an individual measurement or batch of measurements. That is why it is
not useful for interpreting measurements on real samples.
The chemical analyst should continue to do quality control checks. If
they fail for a batch then the whole batch is suspect and all
measurements should be discarded and the samples run again. Discarding
the whole batch does not cause the bias in the data analysis that is
caused by selectively discarding samples which are less than the LLOQ.
--
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
n.holford.at.auckland.ac.nz
Copyright 1995-2011 David W. A. Bourne (david@boomer.org)