Back to the Top
Dear all,
I am testing the linearity of a bioanalytical method using APCI and
ESI LC/MS interfaces on a single quadrupole. I am working in isocratic
mode with a 60:40 mixture of water and 0.02 M ammonium acetate, and I
am testing the range 1.5-1500 ng/ml. The source temperature was set at
400 °C and the flow at 400 µL/min. When I use APCI, injecting the
spiked plasma samples after precipitation with acetonitrile,
evaporation, and reconstitution in eluent, I get r2 = 0.9996, which
sounds good. But when I switch to ESI with the same set of samples,
linearity goes down to 0.965. As I am new to both interfaces, I don't
understand why. Is it related to the ionisation mode? As usual, I hope
you will forgive me for asking for a hint.
Best regards,
Federica
Federica Vacondio
Dipartimento Farmaceutico
Parma
ITALY
Back to the Top
1. ESI is more susceptible to ion suppression than APCI, especially
with your sample preparation method.
2. In ESI you can form adducts (with ammonium or Na from your mobile
phase), and their proportion may not stay constant as the analyte
concentration changes.
regards,
laurian
Vlase Laurian
MD, PhD, Pharm. Chem.
Teaching Assistant
Dept. of Pharmaceutical Technology and Biopharmaceutics
Faculty of Pharmacy
University of Medicine and Pharmacy "Iuliu Hatieganu"
13, Emil Isac
Cluj-Napoca, Cluj 400023, Romania
email:vlaselaur.at.yahoo.com
Back to the Top
The following message was posted to: PharmPK
>I get a r2 = 0.9996 which
>sounds good. But as I switch to ESI with the same set of samples,
>linearity goes down to 0.965....as I am
r2 is not a true indicator of linearity, although many use it that
way. It is not a goodness-of-fit statistic.
r2 is a measure of how much of the variability in the data is
accounted for by the model, in this case a linear model. It may sound
like a goodness-of-fit indicator, but it is not.
The assumptions underlying least-squares regression fitting may not
hold for the two different analytical methods. For example, one
instrument might produce unequal variability at different calibrator
levels, violating the assumptions and requiring weighting.
Consider the lack-of-fit test for linearity, with replicate
observations at each calibrator level. It is a reliable
goodness-of-fit indicator.
Regards,
Stanley L. Alekman
S.L. Alekman Associates Inc.
Pharmaceutical Consultants
Inverness, Illinois
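The lack-of-fit test Stan describes can be sketched numerically. The following Python snippet (hypothetical calibration data, not from this thread) partitions the residual sum of squares into pure error (replicate scatter around each level's mean) and lack of fit (level means around the fitted line), then forms the F-ratio:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate responses at each calibrator level (ng/ml -> peak area)
levels = {
    1.5: [14.8, 15.3, 15.1],
    15.0: [151.0, 148.0, 153.0],
    150.0: [1495.0, 1510.0, 1488.0],
    1500.0: [14900.0, 15150.0, 15050.0],
}

x = np.array([c for c, ys in levels.items() for _ in ys])
y = np.array([v for ys in levels.values() for v in ys])

# Ordinary least-squares straight line
slope, intercept = np.polyfit(x, y, 1)
fitted = intercept + slope * x

# Pure-error SS: scatter of replicates around their own level mean
ss_pe = sum((v - np.mean(ys)) ** 2 for ys in levels.values() for v in ys)
df_pe = len(y) - len(levels)

# Lack-of-fit SS: what remains of the residual SS after pure error
ss_res = float(np.sum((y - fitted) ** 2))
ss_lof = ss_res - ss_pe
df_lof = len(levels) - 2  # number of levels minus number of line parameters

F = (ss_lof / df_lof) / (ss_pe / df_pe)
p_value = stats.f.sf(F, df_lof, df_pe)  # small p -> evidence of non-linearity
```

A small p-value says the level means deviate from the line by more than the replicate scatter can explain, which is exactly the "linearity" question that r2 alone does not answer.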
Back to the Top
The following message was posted to: PharmPK
Dear Federica,
APCI and ESI are quite different ionization techniques: with APCI,
ionization occurs in the gas phase, while with ESI it happens in the
liquid phase. There are numerous reviews and also Web-based tutorials
available. Your instrument manual/tutorial may also explain the
differences in more detail. ESI is more sensitive to ion suppression.
You may also want to compare the S/N at low concentrations (negative
mode can give better S/N). As a rule of thumb, ESI is the method of
choice for more polar analytes that ionize well in solution, while
APCI tends to work better with less polar material. ESI is also
regarded as a 'softer' ionization method than APCI (i.e., less
fragmentation/dissociation). Optimal flow rates tend to be a little
lower for ESI than for APCI. Out of interest, what is the pH of your
ammonium acetate?
You'll need to optimize other instrument parameters (e.g., drying gas
flow, nebulizer pressure, corona current, fragmentor voltage, etc.),
or at least use the default (recommended?) settings, when changing
from APCI to ESI and vice versa.
In short, it would be surprising if you didn't see any differences at
all in your calibration curves/lines between APCI and ESI.
Kind regards,
Frederik Pruijn
Back to the Top
The following message was posted to: PharmPK
Is it also possible that your sensitivity was better in ESI than in
APCI and that your detector was outside its range of linearity?
ESI is known to show more linearity problems than APCI; we have seen
that many times with several drugs.
If your sensitivity is good at your limit of quantification, you can
try injecting a lower sample volume.
You can also use a non-linear calibration model (with weighting), for
example a quadratic model.
Fabrice Guillet
Bioanalytical Group
Fournier Pharma
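Fabrice's suggestion of a weighted non-linear model can be sketched as follows (hypothetical data; the 1/x^2 weighting is an assumption, chosen because it is common in bioanalysis). Since the quadratic nests the straight line, its weighted residual sum of squares can only be equal or lower:

```python
import numpy as np

# Hypothetical detector response that flattens at high concentration
x = np.array([1.5, 15.0, 150.0, 500.0, 1000.0, 1500.0])
y = np.array([15.0, 149.0, 1450.0, 4600.0, 8600.0, 12000.0])

w = 1.0 / x**2  # weighting factor; np.polyfit expects sqrt-weights

lin = np.polyfit(x, y, 1, w=np.sqrt(w))    # straight line
quad = np.polyfit(x, y, 2, w=np.sqrt(w))   # quadratic

def wrss(coeffs):
    """Weighted residual sum of squares for a polynomial fit."""
    return float(np.sum(w * (y - np.polyval(coeffs, x)) ** 2))

wrss_lin, wrss_quad = wrss(lin), wrss(quad)
```

A markedly lower weighted residual sum of squares for the quadratic is a hint that the straight line is being forced through a saturating detector response.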
Back to the Top
The following message was posted to: PharmPK
Dear Frederik,
Thanks for the answer. Indeed, I had expected that the
source-dependent parameters (e.g., drying gas flow, nebulizer
pressure, corona current) would mainly affect the sensitivity of the
method, since when the technician came to show us how to optimise the
performance of the two sources he used just one concentration of the
test compound.
He optimised the compound-dependent parameters (i.e., fragmentor
voltage and the other voltages applied to ionisation) by ramping in
FIA, then chose the flow, and with multiple injections in the loop he
optimised the source-dependent parameters. Then he also tried elution
in the other source (APCI). I guess I will have to look deeper into
it. Anyway, the ammonium acetate pH = 6. Thanks again. When you are
new to things, a hint from someone who knows is a great help!
Federica
Back to the Top
The following message was posted to: PharmPK
If R2 is not an indicator of goodness of fit, even though many use it
that way, what parameter(s) should be considered to check the goodness
of fit between the predicted model and the observed data? Is there any
particular software you would recommend?
Thanks,
Valeria
Back to the Top
The following message was posted to: PharmPK
Federica Vacondio,
ESI generally gives a narrower linear range than APCI; a dynamic range
of 500 to 1000 is typical in ESI, and this may be one reason. Second,
since you use protein precipitation, ion suppression can be higher in
ESI than with APCI. In addition, ion suppression can also narrow the
linear range.
Xiaodong
Back to the Top
>If R2 is not an indicator of goodness of fit, even though many use it
>that way, what parameter(s) should be considered to check the goodness
>of fit between the predicted model and the observed data? Is there any
>particular software you would recommend?
>
>Thanks,
>
>Valeria
Hi Valeria,
In my post, I proposed the lack-of-fit test. If you like, I will send
you a file that describes it.
I am a statistician and not a PharmPK expert. My statements are based
on statistics and data management alone.
The lack-of-fit test is available in most statistical software. It
requires several readings or measurements from each calibrator. That
is generally easy in the HPLC work that chemists do: simply additional
injections from the same vial, or additional vials of the calibrator
solutions. I don't know whether that is readily done in PK work.
While R2 is not appropriate, it is very widely used. Differences in
R2, however, cannot be directly linked to differences in degree of
linearity.
Regards,
Stan Alekman
Stanley L. Alekman PhD
Pharmaceutical Consultant
S.L. Alekman Associates Inc.
Inverness, Illinois
Back to the Top
The following message was posted to: PharmPK
Dear Valeria,
Of course R2 (which is not really a squared parameter!) is a measure
of goodness of fit; it just isn't the only measure, and it does not
tell you the whole story. Remember that you could use an n-th order
polynomial to 'fit' your data, which would give R2 = 1 (a perfect
fit), but this is obviously not very useful for the intended purpose.
The best thing to do (as with ALL statistics and modelling) is to use
common sense! Inspect your experimental and fitted data (i.e., the
regression line) in graphical and tabular format. It is also very
informative to inspect the residuals. The experimental data should be
randomly distributed around the best fit (i.e., the line), the average
distance from point to line should be the same along the line, and any
systematic deviations from this pattern need to be considered. In
addition, have a look at the confidence intervals (limits), as they
should not be too large.
Any (calibration) method needs to be validated, and this is the real
test!
HTH
Frederik Pruijn
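Frederik's advice to inspect the residuals can be illustrated with a short sketch (hypothetical data). For an unweighted least-squares line with an intercept the residuals sum to zero by construction; what the inspection reveals is whether their spread stays constant along the line:

```python
import numpy as np

# Hypothetical calibration data whose scatter grows with concentration
x = np.array([1.5, 15.0, 150.0, 1500.0])
y = np.array([16.2, 148.0, 1530.0, 14800.0])

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

# Tabular inspection: absolute residuals often grow with concentration
# while relative residuals stay roughly flat (heteroscedasticity)
for xi, yi, r in zip(x, y, resid):
    print(f"x={xi:7.1f}  y={yi:9.1f}  residual={r:+9.2f}  relative={r / yi:+.2%}")
```

If the absolute residuals fan out at high concentrations while the relative residuals stay level, that is exactly the pattern that calls for weighted regression rather than a higher R2.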
Back to the Top
The following message was posted to: PharmPK
Hi,
I read an article in LC·GC in which the author recommended using the
sum of the absolute values of the relative errors of the
back-calculated concentrations of your calibration standards. For
example, if the weighting factor is 1/x or 1/x^2, you can use the
method mentioned above to check which fit is better.
Hope this helps.
Xiaodong
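The read-back comparison Xiaodong describes can be sketched like this (hypothetical data). Each candidate weighting is used to fit the line, concentrations are back-calculated from the responses, and the sums of absolute relative errors are compared:

```python
import numpy as np

# Hypothetical calibration data with roughly proportional (percentage) noise
x = np.array([1.5, 5.0, 15.0, 50.0, 150.0, 500.0, 1500.0])
y = np.array([16.0, 48.0, 153.0, 490.0, 1540.0, 4900.0, 15300.0])

def sum_abs_relative_error(w):
    # np.polyfit takes sqrt-weights for weighted least squares
    slope, intercept = np.polyfit(x, y, 1, w=np.sqrt(w))
    x_back = (y - intercept) / slope           # back-calculated concentrations
    return float(np.sum(np.abs((x_back - x) / x)))

score_1x = sum_abs_relative_error(1.0 / x)      # 1/x weighting
score_1x2 = sum_abs_relative_error(1.0 / x**2)  # 1/x^2 weighting
# The weighting with the smaller score gives the better read-back accuracy.
```

This is a practical, regulator-friendly criterion because it scores the fit in the units that matter for a bioanalytical method: the accuracy of the back-calculated standards.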
Back to the Top
The following message was posted to: PharmPK
Stan:
Can we use R2 to check how well the observed data fit a predicted
model (not necessarily a linear one; it could be a decay or
association curve)?
Thanks,
valeria
Back to the Top
The following message was posted to: PharmPK
Dear Valeria:
Like Xiaodong, I read an article entitled 'Selecting the best curve
fit' in LC·GC Europe, 17 (3), 138-143, 2004, which (I'm not a
statistician) was quite clear to me and, I suppose, will be useful
for you too.
Nelida Mondelo
Experimental Pharmacology Department
Gador SA
Back to the Top
>Can we use R2 to check how well the observed data fit a predicted
>model (not necessarily a linear one; it could be a decay or
>association curve)?
Valeria,
R2 applies to any data model, linear or otherwise.
It simply reports the amount of variability in the data that is
accounted for by the model. This variability is known as pure error or
precision error. The remaining, unaccounted for error is variability
that cannot be associated with the model and is considered bias error.
The Lack-of-Fit test I mentioned earlier addresses any model, not only
linear.
Again, if you like, I can send you something understandable on the
Lack-of-Fit test.
Regards,
Stan Alekman
Stanley L. Alekman PhD
S.L. Alekman Associates
Pharmaceutical Consultants
Inverness, Illinois
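Stan's point that R2 applies to any model can be illustrated with a short sketch (hypothetical decay data; scipy's `curve_fit` stands in for whatever fitting tool you use). R2 is computed the same way for a non-linear model as for a line:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical concentration-time decay data
t = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 12.0])
c = np.array([100.0, 78.0, 61.0, 37.0, 14.0, 5.2])

def decay(t, c0, k):
    """Mono-exponential decay model."""
    return c0 * np.exp(-k * t)

popt, _ = curve_fit(decay, t, c, p0=[100.0, 0.2])
pred = decay(t, *popt)

# R2 = fraction of total variability accounted for by the model
ss_res = float(np.sum((c - pred) ** 2))
ss_tot = float(np.sum((c - np.mean(c)) ** 2))
r2 = 1.0 - ss_res / ss_tot
```

Note that a high r2 here still says nothing about whether the mono-exponential form is the right model; replicate measurements and a lack-of-fit test are needed for that.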
Back to the Top
>I read an article in LC·GC in which the author recommended using the
>sum of the absolute values of the relative errors of the
>back-calculated concentrations of your calibration standards. For
>example, if the weighting factor is 1/x or 1/x^2, you can use the
>method mentioned above to check which fit is better.
Hi Xiaodong,
Weighting is probably only needed when the observed data are
heteroscedastic, that is, when the replicate readings at each
calibrator level have different variances or standard deviations. Are
multiple readings taken at each calibrator level in this kind of work?
Regards,
Stan Alekman
Stanley L. Alekman PhD
S.L. Alekman Associates Inc.
Pharmaceutical Consultants
Inverness, Illinois
Back to the Top
The following message was posted to: PharmPK
Dear Stan,
You wrote to Valeria:
> R2 applies to any data model, linear or otherwise.
> It simply reports the amount of variability in the data that is
> accounted for by the model.
> This variability is known as pure error or precision error.
> The remaining, unaccounted for error is variability that cannot be
> associated with the model and is considered bias error.
I am not a statistician and I may be wrong, but I don't think this
description is correct. The amount of variability in the data that is
accounted for by the model is not 'pure error' or 'precision error',
since it is no 'error' at all (at least if one assumes that the model
is correct); it is the variability we are looking for, i.e., the
prediction of how much Y changes if X changes (this is not restricted
to simple linear regression).
Residual error, the 'unaccounted for' error, is due to both random
errors and bias. R2 does not discriminate between random errors
(experimental error, precision error, pure error) and bias. In this
context bias is a systematic error between the experimental data and
the model predictions, due to e.g. model misspecification. To
discriminate between random errors and bias, repeated measurements are
required, e.g. as in the lack-of-fit test you mentioned.
In addition, I would like to repeat my amazement at the persistent
misuse of R2 as a measure of goodness of fit. R2 itself is not to
blame (it is a useful statistic in many applications), but rather the
scientists and other workers ignoring the very important and classical
1981 paper by Lewis Sheiner and Stuart Beal (Sheiner LB, Beal SL. Some
suggestions for measuring predictive performance. J Pharmacokinet
Biopharm 1981;9:503-512). This paper is highly recommended!
Any comments are welcome.
Best regards,
Hans Proost
Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.aaa.rug.nl
Back to the Top
Hi Hans,
The term pure error in statistics refers to variability or precision
error. I labeled the remaining sum of square errors which are not
accounted for by the fitted model with its pure error (precision) as
bias because it is not associated with the fitted model. It is what
is left over after the model has been fitted or specified. All the
data have been partitioned into two categories, that associated with
the fitted model, and everything left over.
The use of correlation and regression coefficients as goodness-of-fit
indicators is widespread, spread in part by organizations like the
International Conference on Harmonization (of drug development and
quality standards) and by analytical chemistry curricula (at least at
some colleges and universities). One cannot blame the practitioners:
not everyone is a statistician, and thank heaven for that...
Regards,
Stan Alekman
Back to the Top
The following message was posted to: PharmPK
Dear Federica,
At pH 6, would you expect your ammonium acetate to be sufficient to
ionize your analyte? In other words, do you know the pKa of your
compound? This is quite important for ESI, as in this case the analyte
needs to be ionized in solution.
Kind regards,
Frederik Pruijn
Back to the Top
The following message was posted to: PharmPK
Dear Stan,
I fully agree with the description in your last message (16 Sept):
> The term pure error in statistics refers to variability or precision
> error. I labeled the remaining sum of square errors which are not
> accounted for by the fitted model with its pure error (precision) as
> bias because it is not associated with the fitted model. It is what
> is left over after the model has been fitted or specified. All the
> data have been partitioned into two categories, that associated with
> the fitted model, and everything left over.
but this is different from the description in your first message (15
Sept):
> It [i.e., R2, jhp] simply reports the amount of variability in the
> data that is accounted for by the model. This variability is known
> as pure error or precision error. The remaining, unaccounted for
> error is variability that cannot be associated with the model and
> is considered bias error.
I trust you made a 'pure error' in your first message (15 Sept), and
that was the reason for my comment.
Best regards,
Hans Proost
Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.-a-.rug.nl
Copyright 1995-2010 David W. A. Bourne (david@boomer.org)