As a rule, our analytical laboratory does not release data falling below our validated limit of
quantification for any given analytical method. We have always defended this, and backed up our
arguments with numerous guidances and regulatory references. However, we are constantly subjected to
unrelenting pressure from our pharmacometrics counterparts to release this data for their modelling.
I have found references advocating for the release of BLQ data points – even one on the FDA website,
but am still unsure as to how the analytical laboratory controls the use of this data following its release.
Is our standpoint outdated? Should we be releasing this data? Where do we draw the line (perhaps
signal to noise ratios)?
Many thanks for your input!
Quality Assurance Manager
K50.30 Division of Clinical Pharmacology
Old Main Building, Groote Schuur Hospital
Unless you can show a quantitative relationship between your analytical response and the actual
concentration of a given sample, you should never release data that has not been appropriately
qualified. Ever. Now, there are requirements outlined in the BMV guidance. There will also be a
symposium co-hosted by the EBF and the DVDMDG on January 20 on a tiered approach to assay
qualification. But below the LLOQ, you do not know what is going on. You have to have empirical data
with generally established constraints. The pharmacometrics folks should understand that.
Christopher J. Kemper, Ph.D.
Pharma Navigators, LLC
Your email is well-written and you have asked an excellent question. There is no common consensus in
the industry of the "appropriate" thing to do with data outside of the qualification range. There
are however two main "camps" or opinions on the matter.
The first was eloquently stated by Chris. Essentially, there is no confidence in data outside of the
qualification range so it should not be reported.
The second opinion is that any analytical method is simply a relationship between drug concentration
and some "response" variable (usually peak area ratio). While there is lower confidence outside of a
qualified analytical range, there is some confidence in those values when an instrument response is
registered. Therefore, data outside the qualified range should be reported with the appropriate
confidence (i.e., %CV at that concentration level). The reported value and the confidence can then
be used even when measurements are outside the qualified range.
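One way to make this second opinion operational: if each reported value carries its validation %CV, a downstream fit can weight by inverse variance instead of discarding low-confidence points. A minimal sketch (all concentrations, times, and %CVs invented for illustration):

```python
import numpy as np

# Hypothetical reported concentrations (ng/mL) and the %CV established
# at each level during validation; the last two points fall below the
# nominal LLOQ and carry much larger %CVs.
t = np.array([2.0, 4.0, 8.0, 12.0, 16.0])      # hours post-dose
conc = np.array([50.0, 20.0, 5.0, 1.2, 0.4])
cv_pct = np.array([4.0, 5.0, 8.0, 25.0, 60.0])

# On the log scale the SD of log(conc) is roughly the CV (as a fraction),
# so weighting each residual by 1/CV gives inverse-variance weighting.
w = 1.0 / (cv_pct / 100.0)
slope, intercept = np.polyfit(t, np.log(conc), 1, w=w)
half_life = np.log(2) / -slope
print(f"terminal half-life ~ {half_life:.1f} h")
```

The sub-LLOQ points still inform the terminal slope, but at a 60% CV they carry roughly 1/200th the weight (in the squared sense) of the best-qualified point.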
As an analytical laboratory, you should work closely with your pharmacokinetic expert to devise a
data release plan for each study. There are situations where each of the opinions mentioned above
might be appropriate. I would recommend that you build a more flexible policy that imparts
responsibility and collaboration with the pharmacokinetic scientist to decide the appropriate data
release mechanism in each situation.
Nathan S. Teuscher, PhD
Founder and President, PK/PD Associates
I am certainly no expert on PK, but I did some bioequivalence studies a few years ago, and creeping in
then was something called weighted bioequivalence for highly variable compounds: basically a means
of weighting the BE outcome in favour of a compound deemed highly variable, and the compounds under
investigation seem to be increasing in variability. This is not bible, but it may give you some leads as to
why BLQ data can even make a contribution to an AUC.
Senior statistical programmer at Parexel international.
Unfortunately, you cannot control the use. And that can land the analytical lab in receipt of some
nasty letters from FDA. Even reporting out the instrument responses is not innocent enough, since
some are all too eager to convert those to values. Jurgen Venitz lets it rest with just indicating
good arguments. Unfortunately there is no rational middle ground; although attempts have been made,
they all appear foolish. For example: if a result is less than the LLOQ, report it as some fraction of the LLOQ.
[A few replies - db]
In my opinion, BLQ data should not be released.
Prof. Dr. Zafar Iqbal
All that one needs to do is to look at almost any statistics book. The information has been
there for DECADES, but the lab community remains blind to it. The correct measure of precision of a
measurement that has Gaussian noise is the reciprocal of the variance. What is all the DEBATE about?
There is nothing to debate. Look at the real information. The only other thing I have learned is
that the detector response in some MS assays may not be smooth near zero. That is a different issue
entirely. But as long as detector response is smooth, just look at any statistics book, for example,
DeGroot M: Probability and Statistics, 2nd ed, 1989, pp. 403, 423.
Jelliffe RW, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis P, Barbaut X, and Tahani B:
Individualizing Drug Dosage Regimens: Roles of Population Pharmacokinetic and Dynamic Models,
Bayesian Fitting, and Adaptive Control. Therapeutic Drug Monitoring, 15: 380-393, 1993.
What is the matter with the FDA? What is the matter with the College of American Pathologists?
Why don't they get some nasty letters? What is the matter with the lab community that there is any
debate at all about this? What are the ethics of RELEASING data? What are the ethics of withholding
it when there is, and has been for decades, good evidence that there is no LLOQ at all!
All the best to you all,
Your "validated" limits of quantification are simply not valid. There is NO SCIENCE in
arbitrarily setting some CV% as an upper limit of "precision"; that is judgment only, and not
science. CV% is simply NOT A VALID criterion of precision. 1/variance is the VALID measure. Let's
all get real, and use a correct measure of lab assay precision, PLEASE! Simply LOOK at the
references I gave. Please SEE how you can do so much better. How would you feel if you had HIV and
someone in the lab simply reported <50 copies? Suppose it was really 45? Or 5? How would you feel if
that person were YOU, and someone in the lab withheld that from you?
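A toy calculation (all numbers invented) makes the point concrete: if the assay's absolute noise is roughly constant at low levels, CV% explodes as the true value falls even though precision in the 1/variance sense is unchanged, so a fixed CV% cutoff censors values whose absolute uncertainty has not changed at all.

```python
import numpy as np

# Assume a roughly constant absolute noise near zero: SD ~ 2 copies/mL
# at every low level (a made-up number for illustration).
sd = 2.0
for true_value in [100.0, 50.0, 10.0, 5.0]:
    cv_pct = 100.0 * sd / true_value      # blows up as the value falls
    precision = 1.0 / sd**2               # DeGroot's measure: 1/variance
    print(f"value={true_value:6.1f}  CV%={cv_pct:5.1f}  1/var={precision:.2f}")
```

A 20% CV cutoff would reject everything below 10 copies/mL here, although every measurement carries exactly the same absolute error.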
All the best to all,
Question out of curiosity.
Is there no way to release the whole dataset with an attached evaluation?
Basically, you divide the results into "validated following XYZ regulation" and "not validated",
and you clearly state that you are not held responsible for the use of the "not validated" dataset.
If the client wants to use them for whatever reason, they are to be held responsible.
[A few more replies - db]
This is an old war. There are two sides:
1. The control freak chemical analysts (look at the recent posts in this thread for evidence of
their control freak attitudes).
2. The pharmacokinetic scientists who eventually have the responsibility of aiding patients through
understanding of drug disposition and effects.
One of the differences between the statements made by these groups is that the chemical analysts
only mention (without explicit references) sources such as "numerous guidances and regulatory
references", acronyms (BMV, EBF, DVDMG) and other non-peer reviewed utterances.
On the other hand the PK scientists can point to the published scientific literature which has
repeatedly demonstrated the bias caused by not having access to the information that is concealed by
the chemical analysts (plus suggestions for partially dealing with the problems caused by hiding the
data).
1. Beal SL. Ways to fit a PK model with some data below the quantification limit. Journal of
Pharmacokinetics & Pharmacodynamics. 2001;28(5):481-504.
2. Duval V, Karlsson MO. Impact of omission or replacement of data below the limit of
quantification on parameter estimates in a two-compartment model. Pharm Res. 2002;19(12):1835-40.
3. Thiebaut R, Guedj J, Jacqmin-Gadda H, Chene G, Trimoulet P, Neau D, et al. Estimation of
dynamical model parameters taking into account undetectable marker values. BMC Med Res Methodol.
4. Ahn JE, Karlsson MO, Dunne A, Ludden TM. Likelihood based approaches to handling data below
the quantification limit using NONMEM VI. J Pharmacokinet Pharmacodyn. 2008;35(4):401-21.
5. Byon W, Fletcher CV, Brundage RC. Impact of censoring data below an arbitrary quantification
limit on structural model misspecification. J Pharmacokinet Pharmacodyn. 2008;35(1):101-16.
6. Bergstrand M, Karlsson MO. Handling data below the limit of quantification in mixed effect
models. AAPS J. 2009;11(2):371-80.
7. Xu XS, Dunne A, Kimko H, Nandy P, Vermeulen A. Impact of low percentage of data below the
quantification limit on parameter estimates of pharmacokinetic models. J Pharmacokinet Pharmacodyn.
8. Senn S, Holford N, Hockey H. The ghosts of departed quantities: approaches to dealing with
observations below the limit of quantitation. Stat Med. 2012;31(30):4280-95.
It is a principle in civilized countries to tell the truth, the whole truth and nothing but the
truth. I encourage the chemical analysts to read the scientific literature and to tell the whole
truth.
If you talk to pharmacometric scientists in regulatory agencies you would learn that they are in
full agreement with telling the whole truth. The reason you don't find this in the old guidances is
because of regulatory inertia and political infighting which means the old guidances are not updated
to give science based guidance.
You cannot simply state that you are not responsible. You know what use the BLQ data will be put to.
That is why there is the requirement to indicate BLQ and nothing else.
I do not know what happened to my original reply. Anyway, I will try to reconstruct. Some agree
that the data should stop at the LLOQ.
Others believe that results less than the LLOQ should be reported as some fraction of the LLOQ, which is
the weakest argument. Assays consist of two major components: accuracy (how close we are to the
expected value) and precision (how closely values agree). The LLOQ in assays is related to noise and
to the performance of accuracy as % bias and precision as standard deviation or % CV. In MS assays
the LLOQ should be 5 to 10 times the noise; in ligand binding assays the LLOQ should likewise be set
above the noise. So the LLOQ must be removed from noise and meet accuracy and precision
requirements: not just performance in terms of precision, but in terms of accuracy and a relationship to noise.
Reporting out instrument responses of samples below the LLOQ seems innocent, but this will be used to
generate concentrations, and the CRO will be held responsible for the sins of the Sponsor.
My view is that if you use the BLQ values, which are obtained far apart at the end of the
concentration vs. time curve, you may end up with far-apart values that hover around the limit of
detection. Instead of getting a trapezoid you will be dealing with a rectangle that will have a very
large but incorrect AUC. It is better to use the correct values than to knowingly use an unreliable
value, especially for regulatory submissions.
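The rectangle-versus-trapezoid concern is easy to demonstrate with a toy calculation (a hypothetical monoexponential tail and noise floor, not real data):

```python
import numpy as np

def auc_trapz(t, c):
    """Linear trapezoidal AUC."""
    return np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))

# Hypothetical terminal samples: the true curve keeps declining, but
# measured values below the LOD hover around a noise floor of ~0.2 ng/mL.
t = np.array([8.0, 12.0, 24.0, 48.0, 72.0])
true_c = 10.0 * np.exp(-0.3 * t)             # true monoexponential tail
noisy_c = np.maximum(true_c, 0.2)            # BLQ samples read as noise

print(auc_trapz(t, true_c), auc_trapz(t, noisy_c))
```

With these made-up numbers the noise-floor "rectangle" more than triples the tail AUC, which is exactly the inflation described above.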
BLQ values may be useful in pharmacometrics, but I would not recommend their use in
non-compartmental PK analysis. In the latter case I would consider BLQ values either as "missing" or as zero.
Just my opinion.
A summary, as I see it, with some additions from bioanalysts here in the states. A very old but
still hot topic.
You have heard, no doubt, more advice about releasing BLQ datapoints than you ever wanted to hear. I
have contacts within the bioanalytical sector here in the US and asked them their opinions as well.
The core issue is this: you will be held responsible for any number you send out from your
laboratory and you have no idea where it will end up. As Edward said “you cannot control the use”.
Any qualifiers you may add may also be lost or misunderstood. I have had reviewers question why the
numbers from a preliminary PK data release were different from the numbers in the report. Basically,
they said that 123.45 was different than 123. It took a while for them to understand the concept
that I had originally sent raw data not rounded to an appropriate number of significant figures.
So I reiterate my original advice: send nothing out (including raw data) unless you can justify your
numbers according to recognized best practices in bioanalytical analysis. To date, these practices
are outlined in ICH and FDA guidelines. There is a move afoot to develop more flexible guidelines
for early development bioanalytical projects. If the PK/stats people want to play with the data,
they should take responsibility for that.
Nathan’s point , however, is on the money: if there is “some confidence in those values when an
instrument response is registered…[the] data outside the qualified range should be reported with the
appropriate confidence (i.e., %CV at that concentration level). The reported value and the
confidence can then be used even when measurements are outside the qualified range”. To make this
approach viable, multiple responses at fractions of the LLOQ need to be made. But the practical
truth of the matter is that there is rarely any kind of information about instrument responses in that
region, and it is unclear whether the time and resources needed to characterize the region below the
LLOQ would be worth it. The modelers are
experts in their area and they would be responsible for justifying how they use the data. The
bioanalytical folks would be responsible for providing data with the appropriate caveats.
Workarounds for dealing with “left censored” (as Peter Bonate describes missing data such as “BLQ”)
have their issues. Omission of the data ("missing") can cause severe bias in multiphasic models: CL
is underestimated and peripheral volume and T1/2 overestimated. Censoring observations like this can
lead to a wrong choice in the PK model. Setting the value to “0” can make some models impossible to
evaluate and tends to introduce a negative bias in concentration summary statistics. Using the LLOQ
value would introduce a positive bias. Frequently, left-censored data are imputed using some
fraction of the LLOQ because of the ease of implementation. Using LLOQ/2 was the method of
choice at Wyeth Laboratories when I was there in the ‘80s. Bonate recommends a maximum likelihood
approach and found that it frequently does better than substitution methods. While a bit dense,
Bonate goes into more detail in his book “PK-PD Modeling and Simulation”.
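The maximum-likelihood idea Bonate describes can be sketched in a few lines. This is a simplified, hypothetical simulation (a censored normal sample, not a PK model): quantified values contribute the usual density to the likelihood, and each "< LLOQ" report contributes the probability mass below the LLOQ.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
lloq = 1.0
x = rng.normal(loc=1.0, scale=1.0, size=2000)  # simulated "true" values
obs = x[x >= lloq]                              # values the lab reports
n_cens = int(np.sum(x < lloq))                  # reported only as "< LLOQ"

# (a) LLOQ/2 substitution estimate of the population mean
sub_mean = (obs.sum() + n_cens * lloq / 2.0) / x.size

# (b) Censored-normal maximum likelihood: quantified points contribute
# the normal density; each "< LLOQ" report contributes the probability
# mass below the LLOQ.
def nll(p):
    mu, log_sd = p
    sd = np.exp(log_sd)
    return -(norm.logpdf(obs, mu, sd).sum()
             + n_cens * norm.logcdf(lloq, mu, sd))

mu_hat = minimize(nll, x0=[obs.mean(), 0.0]).x[0]
print(sub_mean, mu_hat)  # the ML estimate sits closer to the true mean 1.0
```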
In the end, as always, there is no substitute for empirical data to somehow characterize the BLQ
region. Then Roger Jelliffe’s frustrations about using existing statistical methods might be
satisfied. What kind of data (and at what cost) that will satisfy everyone’s needs is the question,
and has been for a long time.
Christopher J. Kemper, Ph.D.
Pharma Navigators, LLC
We may be controlled freaks but not control freaks. Our limits are based on both accuracy and
precision. We can often get very good reproducibility below our LLOQ. However the accuracy
associated with those values may be greater than +/- 50% so that the reliability goes from "that is
either Manhattan or Brooklyn" to "that is either Manhattan or Buenos Aires". Would anyone be happy
getting a result that is +/-50% BIAS or greater for accuracy but has a CV of 5%?
> We may be controlled freaks but not control freaks. Our limits are based on both accuracy and
> precision. We can often get very good reproducibility below our LLOQ. However the accuracy
> associated with those values may be greater than +/- 50% so that the reliability goes from "that
> is either Manhattan or Brooklyn" to "that is either Manhattan or Buenos Aires". Would anyone be
> happy getting a result that is +/-50% BIAS or greater for accuracy but has a CV of 5%?
You demonstrate very clearly your misunderstanding of the problem.
Your description of the bias as "+/-50%" is a meaningless statistical statement.
If you say the bias is +50% at some concentration with an imprecision ("CV") of 5% then this means
you are using the wrong model to describe the true concentration versus observed response (e.g.
"peak height", etc) relationship. You should choose a "standard curve" model that produces a bias
approaching zero at all concentrations.
If the "bias" at the same true concentration has a CV of 50% (perhaps that is what you mean by "+/-
50%") but an average around 0 then your model is fine but there is a random source of bias. This
kind of random source of bias can be handled very easily by standard residual error models.
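The "standard residual error models" mentioned here are typically a combined additive-plus-proportional form, with Var(y) = sigma_add^2 + (sigma_prop * pred)^2. A small sketch with invented parameters shows why the CV% blows up near zero while the absolute SD stays bounded:

```python
import numpy as np

# Combined additive + proportional residual error model, as commonly
# used in PK modelling (parameter values invented for illustration):
sigma_add = 0.05    # additive SD, in concentration units
sigma_prop = 0.05   # proportional SD (5% of the prediction)

def error_sd(pred):
    """SD of the combined additive + proportional error model."""
    return np.sqrt(sigma_add**2 + (sigma_prop * pred)**2)

for pred in [10.0, 1.0, 0.1, 0.01]:
    sd = error_sd(pred)
    print(f"pred={pred:6.2f}  SD={sd:.4f}  CV%={100.0 * sd / pred:7.1f}")
```

Near zero the additive term dominates: the SD stays bounded at about sigma_add even as the CV% runs off to infinity, which is why a near-zero measurement still carries usable information once the error model is specified.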
So if you learned some statistics and some modelling I think you would understand better why
pharmacokinetic scientists are frustrated by your censored view of the world. It would also improve
the concentration measurements that you pass on for interpretation by scientists.
So if we use your Buenos Aires example: can I ask, would it always be Buenos Aires (+/- 5%) for
every sample, or could it sometimes (on a different day with different conditions) be Vancouver (+/- 5%)?
There are always two issues here:
1) regulations & beliefs of analytical chemists
2) science & beliefs of pharmacometricians
Clearly we are never going to agree on everything. But, if for the time being we left the
regulations to one side, it would be nice to learn about the beliefs of analytical chemists that
imply that BLQ samples are so risky...
Stephen Duffull | Chair of Clinical Pharmacy | Otago Pharmacometrics Group
School of Pharmacy, University of Otago
PO Box 56, Dunedin, New Zealand
May I just beg the LLOQ crowd to answer logically and honestly a few very simple questions (and no,
"because the guidance says so" is not an answer). There are only four possible ways to deal with the
data below LLOQ:
1. Leave it as is.
2. Replace with zero or LLOQ.
3. Replace with "missing"
4. Replace with some "intelligent" value, such as LLOQ/2.
Can anyone please explain why any of the options 2, 3, or 4 is better than option 1?
Now, regarding the question of whether to report or not to report the BLOQ data. If you think that
you are not reporting it by calling it "non-reportable", or by declaring it missing, you are wrong -
you are reporting it anyway, by influencing the rest of the dataset, by omission! Declaring the
sample missing is the worst, because it implies that you know nothing about that sample, whereas in
reality you know a lot - you know that it existed, that it was measured, and the concentration was
low, namely, it was between zero and the LLOQ. So calling it missing is an outright lie, and please
forgive me for calling a duck a duck. You cannot escape the fact that the sample existed, and you
cannot pretend otherwise simply because you did not like the data. Imagine you had a blood pressure
monitor that was not as reliable in measuring low pressure as it was in measuring high pressure -
would you pretend that you did not see the patients who had low blood pressure?
Please, just think about it for a minute. And it is not about whether you are a bioanalyst or
pharmacokineticist (I, for one, am both). It is about simple, unbiased logic.
Andrew and Chris,
You are forgetting the much more commonly used likelihood based methods of handling BQL data -
namely the most popular 'M3' method. You can read a number of papers, but one easy-to-understand one is:
Likelihood based approaches to handling data below the quantification limit using NONMEM VI. Jae Eun
Ahn, Mats O. Karlsson, Adrian Dunne, Thomas M. Ludden
J Pharmacokinet Pharmacodyn. 2008 August; 35(4): 401–421
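For readers unfamiliar with M3, the core of the method is small: a quantified observation contributes its density to the likelihood, while a BQL observation contributes the probability that the prediction plus noise falls below the LLOQ. A minimal, self-contained sketch (simulated monoexponential data with additive error; not NONMEM code and not any published dataset):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
lloq = 0.5
t = np.linspace(1.0, 24.0, 12)
true_c = 8.0 * np.exp(-0.25 * t)                # simulated monoexponential
y = true_c + rng.normal(0.0, 0.3, t.size)       # additive assay noise
is_blq = y < lloq                               # lab reports "< LLOQ"

def nll(p):
    c0, ke, log_sd = p
    pred = c0 * np.exp(-ke * t)
    sd = np.exp(log_sd)
    ll = norm.logpdf(y[~is_blq], pred[~is_blq], sd).sum()   # quantified
    ll += norm.logcdf(lloq, pred[is_blq], sd).sum()         # M3 term
    return -ll

fit = minimize(nll, x0=[5.0, 0.1, 0.0], method="Nelder-Mead")
c0_hat, ke_hat, _ = fit.x
print(c0_hat, ke_hat)  # compare with the simulated C0 = 8, ke = 0.25
```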
Also, I don't think you are correct in your statement of
> you know that it existed, that it was measured, and the concentration was low, namely, it was
> between zero and LLOQ.
It is my understanding, and I have never had data reported to me that distinguished 0; there is
simply "< LLOQ", which can be quite frustrating for trials with prolonged rich sampling.
For Chris: while I appreciate your concern regarding the Manhattan to Buenos Aires example, if that
were the case, the rest of our data would be like measuring the distance from Earth to other
planets; thus even that 'huge' margin for error is minimal compared to the rest of the error in the
model, and as Nick mentioned this is quite easily accounted for with mixed error models. I'd be
satisfied knowing at least a measurable amount of analyte was detected vs. nothing detected,
regardless of concerns about the accuracy of the measurement within the boundaries of the limit of
quantitation to the limit of qualification.
[A few replies - db]
1. We leave it as is and report a) that it is an extrapolated point. Surely most folks are sensitive
to that! b) We also describe the precision (SD or CV) and accuracy (bias) associated with the LLOQ,
and perhaps the two or three anchor points below the LLOQ, so that you can "see" the inaccuracy
associated with points below the LLOQ. Or perhaps you as PKists would prefer we not trivialize data
by our concerns regarding accuracy.
2. We can do nothing to prevent PKists from assigning a zero or LLOQ value to the data we report to them.
3. We report the same consideration used to declare the LLOQ in LC-MS assays, and even Roger permits those assays some
considerations in declaring an LLOQ. See earlier entries in this thread.
4. We report a value when there isn't one. But again this approach is covered under 2 above.
If I am to understand the blood pressure analogy, you would report a value you knew might be precise
but inaccurate because? The honest thing would be to indicate you could not assign a value, and that you
might arrange to find a better instrument with better accuracy rather than report the blood
pressure as somewhere between two limits.
Please read my email again. This is all I was asking - stop and think for a minute, with an open
mind and without blindfolds. The paper you mentioned is a perfect example of a solution in search of
a problem - because there is no scientific need to find "approaches to handling data below LOQ".
Although I'm sure the authors had fun polishing their NONMEM skills. Just don't call it science.
Remember phlogiston? There was a lot of theorizing around that too. The only problem: it did not
exist. Nor does the LOQ.
As for my other statement, I am not sure what you disagree with. Did the sample exist? - yes. Was it
measured? - yes. Was the result obtained? - yes. So why is it missing?
I suppose that Ed means an unknown bias within +/- 50% randomly
distributed between samples, but a CV within a sample of 5%.
In my opinion, as far as the bioanalyst thinks that the response below the
LLOQ is questionable, it should not be reported as a value.
It is not missing, it is BLQ.
You have provided a very thoughtful and logical assessment of how to treat the BLQ values in
standard PK analysis. Here are some of my additional comments (see my previous e-mail on this
subject) after reading various e-mails on BLQ values.
1. I do not prefer to use values below the limit of detection in PK analysis, since these values by
definition do not meet the assay specification.
2. We could mention it as "missing", but this is not true since the sample was NOT missing. It was in
fact assayed and certain values were produced.
3. We could consider it as "zero", but this too is not correct, since measurable values were
produced. The only exception is the zero-hour sample.
4. We are then left with one credible and defendable argument, and that is "concentrations below the
limit of quantification (BLQ) were not used in PK analysis".
I would love to hear the PK forum comments on proposal 4.
As a theoretician newbie in applied analytical chemistry & PK, I am very
interested by the discussion on handling BLQ data.
I tried to figure out all comments made to date, and I have a few
concerns and questions, for both approaches.
1) I did not see a clear definition/method of what the limit of
detection and limit of quantification are (at least the lower ones).
I can imagine it quite easily when the assay does not have a blank,
as may be suggested by ICH when advising to make at least 5 points
at 80%, 90%, 100%, 110% & 120% of the target concentration (and
probably other approaches also): it is simply not possible to have
anything below the lowest concentration used in building the
calibration curve, because we have no information on what's going on there,
for instance whether it is still linear for the simplest model of a
straight line as a calibration curve. I'm not clear if this would
be the LOQ or the LOD, however. But I guess I agree with defenders of "just
say < LOQ".
It's more difficult for me to apprehend what it could be if a blank
is used in building the calibration curve, because then we have a model
for the whole range, so it should always be possible to have a
concentration and its confidence interval. In this setting, I
guess I agree with defenders of "no LOQ".
Is this first distinction something that seems relevant?
By the way, it seems that assay results are never given with a
confidence interval on the value; is there any reason for that? Or
is it a special case of the results I got so far?
2) Just to counterbalance the previous comment: even if a blank is
present, if the confidence interval contains 0 and negative values
because of the model, there is clearly something wrong. Just
truncating the interval would lead to overestimating its
coverage. In that case, it's difficult to say anything about the
precision (what is the guarantee that the delta method gives a good
approximation of sigma? and even so, since the distribution cannot be
Gaussian, what can we really do with that sigma?), or the accuracy
(the model is probably wrong): is it really better to use these
values than to use < LOQ and suited statistical methods to handle them?
2) For both cases, I've read that the LOD/LOQ is the first concentration
for which we can say we are above noise. But I can see at least two
ways to do that:
- the first concentration (X) for which the predicted measure (Y) is
significantly different from 0
- the first concentration (X) whose confidence interval does not include 0
I would prefer the second one, but which is the one used to define the LOQ?
3) I've read that LOD is for 99% / 3 sigma [if Gaussian] intervals,
whereas LOQ is for 10 sigma intervals. If true, why such a choice?
Why not 2 (95 %) and 9, or even non-integer values?
(Obviously, all such concerns do not apply directly if the LOQ is just the lowest
reference concentration for the assay curve, but I guess they can be
transposed if the aim is to be above this lowest concentration
instead of 0.)
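On question 3: the conventional choice comes from ICH Q2-style definitions, LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S, where σ is the SD of the blank (or of the calibration residuals) and S is the slope. The 10 is chosen so that at the LOQ the noise is about 10% of the signal (roughly 10% CV), while 3.3 gives approximately 99% one-sided confidence that a signal is distinguishable from blank noise. A small sketch with hypothetical calibration data:

```python
import numpy as np

# ICH Q2-style estimates from a calibration line: sigma is the SD of
# the regression residuals (or of the blank) and S is the slope.
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])       # nominal levels
resp = np.array([0.02, 1.05, 1.98, 5.10, 9.95])   # invented responses
slope, intercept = np.polyfit(conc, resp, 1)
resid_sd = np.std(resp - (slope * conc + intercept), ddof=2)

lod = 3.3 * resid_sd / slope   # ~99% one-sided detection vs. blank noise
loq = 10.0 * resid_sd / slope  # sigma ~ 10% of signal, i.e. ~10% CV
print(lod, loq)
```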
4) For precision/accuracy: I thought recent approaches were not so
focused on these, but instead just made sure that results could be guaranteed
to lie between predefined acceptance limits; it does not matter
whether the method is biased or not, precise or not, as long as it
guarantees that. Same idea as equivalence tests, but with
kinds of equivalence bands, being sure that results fall within them. Is
that an oversimplification?
5) An approach I didn't read about for handling BLQ values: impute
randomly a value between 0 and the LOQ, and do this several times, at least to see how large the influence of
arbitrarily setting 0, LOQ/2, LOQ or any other fixed value is. It is
probably not as good as using methods for censored data, but does
anyone have experience with it?
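The random-imputation idea in point 5 is straightforward to try. A toy sketch (invented data): impute each "< LLOQ" value uniformly on (0, LLOQ), repeat, and look at how much the summary statistic moves, which bounds the influence any fixed substitution (0, LOQ/2, LOQ) could have had:

```python
import numpy as np

rng = np.random.default_rng(2)
lloq = 1.0
quantified = np.array([8.2, 5.1, 3.0, 1.9, 1.2])  # invented reported values
n_blq = 3                                          # reported as "< LLOQ"

# Impute each BLQ value uniformly on (0, LLOQ), repeat many times, and
# look at the spread of the resulting summary statistic.
means = []
for _ in range(1000):
    imputed = rng.uniform(0.0, lloq, size=n_blq)
    means.append(np.concatenate([quantified, imputed]).mean())
means = np.asarray(means)

print(means.mean(), means.std())  # the spread bounds the influence any
                                  # fixed substitution could have had
```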
6) A problem for both approaches: discordant results when repeating the measure on the same sample two
times?
Thanks in advance for any clarification/precision/correction on these points!
I am not sure re Nick's comment about a measure of accuracy. Our models are predicated on
selecting the simplest model returning the best measures of performance in terms of accuracy and
precision. In most assays the accuracy profile is U-shaped, with the lowest inaccuracy generally
around the midpoint of the range and the greatest inaccuracy at the bottom and top of the curve. I am
open to learning about better curve models and would appreciate any direction. Currently we use
linear to 5PL fits, both with and without weighting. But if there are others applied in analytical
applications I will take a look. On another point: what do you, Nick, use as a measure of
separation between observed values and expected ones?
Andrew (& Ed)
I don't think you have fully summarised the methods that are possible for dealing with data that is
reported as LLOQ.
There are three methods (at least):
1) discard the data reported as BLOQ.
2) impute the data reported as BLOQ (this is essentially your methods 2 & 4, but there are other
methods as well for this imputation).
3) compute the probability that the data is indeed BLOQ given the lab has reported it as BLOQ.
Your (1) "leave it as it is" is not possible as "< LLOQ" is not a numerical value and hence cannot
be used in calculations for PK purposes.
I think it is important to remember that just because the lab reports the sample as BLOQ does not
mean that it is indeed BLOQ - just that it was BLOQ on that occasion. Hence your statement " you
know that it existed, that it was measured, and the concentration was low, namely, it was between
zero and LLOQ." is not true.
If we conduct a thought experiment:
Imagine it was possible to repeat the assay 100 times on the BLOQ sample. On some occasions it
would, by chance, be greater than the LLOQ and hence the assayed value reported and on other
occasions it would be BLOQ and hence "< LLOQ" reported. If we compute the proportion of times that
the assay identified the sample as "< LLOQ", then this is the probability that the sample was indeed BLOQ.
This thought experiment outlines the integration that must be performed over the non-reported value,
which is what pharmacometricians don't like doing if they can avoid it...
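The thought experiment above is easy to run as a toy Monte Carlo (the noise level and true concentration are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
lloq = 1.0
assay_sd = 0.3       # assumed assay noise near the LLOQ (invented)
true_conc = 0.9      # a sample whose true level sits just below the LLOQ

# "Repeat the assay 100 times": by chance the read-out sometimes crosses
# the LLOQ and a number is reported, sometimes "< LLOQ" is reported.
readings = true_conc + rng.normal(0.0, assay_sd, size=100)
p_blq = np.mean(readings < lloq)
print(f"fraction reported as < LLOQ: {p_blq:.2f}")
```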
All: In development and validation we do try to select the simplest model to fit the data. That
fit is not assessed by r2 but by both the precision and accuracy associated with points across the
calibration range. This can be done in tabular form and graphic form. I have included the URL
for an accuracy curve below:
The accuracy is plotted as % bias (absolute) on the y-axis and concentration on the x-axis. Using a
20% limit for the tolerance defines the LLOQ and ULOQ and also illustrates the increase in
uncertainty beyond both the LLOQ and ULOQ. We generally do this, or something much like this, when
evaluating each model to set the LLOQ and ULOQ at the best limit of tolerance we can get. The same
approach is used across platforms. We can assure our end users we try our best. We cannot assign
an accuracy measure to unknowns because they are just that. We can and do state the limits of the
assay in terms of precision and accuracy in the validation, and repeat it in the report along with a
summary of performance during sample analysis; that performance is also listed in the method and in
the sample analysis outline.
Dear Ed and all,
What is the point of all this when all you do is wind up thinking in terms of percent and not
the real data, which is the SD? I saw an interesting thing the other day on the Westgard rules, which
seem to be a not at all unreasonable way to evaluate assays. They are heavily based on the
assumption that lab errors have a Gaussian distribution, which is also quite reasonable. What I
really DO NOT understand is why you guys drop all this and then only think in terms of percent
accuracy and percent precision. ONCE AGAIN, look at the statistics books, for example,
Dear Ed and all,
My reply got truncated. Here is more. I recently saw an article on the Westgard rules for
quality evaluation. They sound quite reasonable to me. They are all based on the assumption of
Gaussian noise in assays. They evaluate assay SD. That is the way to go. Do not depart from SD to
CV%. That is where the lab QC goes wrong. That is when they think they must censor low values. That
is what is crazy. Why do labs do that and never look at a statistics book? That also is what is
crazy. There are NONE SO BLIND as those who CHOOSE NOT TO SEE. Once again, for example, I beg you to
look at DeGroot, Probability and Statistics, 2nd ed, page 403, where it says "The precision of a
normal distribution is defined as the reciprocal of the variance". If you guys don't like that, what
is your response? You cannot remain silent. That simply will not fly. You must mount a rational
response to this. I must say that I have been listening intently for a discussion of this point, but
have heard NOTHING scientific. Why do labs limit themselves so much when they can do so much better?
For all that, if the lab guys are so socially responsible that they try to make us think that THEY
must decide what to RELEASE rather than withhold, why do they not release the errors with which they
make any measurements? It is not that the MD's will be confused.... don't go there, please. Don't
think they are so stupid as that.
Please, please, just look, just once. See what you are missing out on.
Best to all,
Roger W. Jelliffe, M.D., F.C.P., F.A.A.C.P.
Professor of Medicine Emeritus,
Founder and Director Emeritus
Laboratory of Applied Pharmacokinetics
USC School of Medicine
Consultant in Infectious Diseases,
Children’s Hospital of Los Angeles
4650 Sunset Blvd, MS 51
Los Angeles CA 90027
Back to the Top
You raise an interesting point: what are the noise power spectra "typically" observed for today's
chromatography? Does anyone have contemporary actual noise power, and actual noise-plus-signal power,
spectra they can share?
And if so, what is the noise power effect of the intermediate frequency filtering that is used for
diode array detectors?
Frank Bales, Ph.D.
Bales Pharma Consulting LLC.
Back to the Top
Roger: it is again that we need to consider accuracy as well as precision. We could set and
report limits in terms of SD for precision, relative to a 20% CV: instead of reporting a 20% CV
where the SD is 20 for a mean of 100, we would just indicate 20 rather than 20%; or 1, where the SD
is 1 for a mean of 5, rather than reporting 20%. What does this gain us? Levey-Jennings plots for
Westgard rules can also be plotted using percent CV.
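[Editor's note: Ed's point that the limits can be drawn in either unit can be checked with a minimal sketch. This is illustrative only and not from the original thread; the QC target, SD, and values are hypothetical. A Westgard 1-3s check gives the same verdict whether the limit is stated as an absolute SD or as a CV% of the mean, because CV% is just the SD rescaled by the mean.]

```python
# Minimal Westgard 1-3s sketch (illustrative; QC target and SD are
# hypothetical). The 1-3s rule flags a QC result more than 3 SD from the
# target mean. Stating the limit as an absolute SD or as a CV% of the mean
# gives the same verdict, since CV% is the SD rescaled by the mean.

def violates_1_3s(value, mean, sd):
    """True if a single QC result falls outside mean +/- 3 * SD."""
    return abs(value - mean) > 3 * sd

def violates_1_3s_cv(value, mean, cv_percent):
    """The same rule with precision quoted as CV% of the target mean."""
    sd = cv_percent / 100.0 * mean
    return abs(value - mean) > 3 * sd

# Hypothetical QC material: target 100 ng/mL, SD 5 ng/mL (CV 5%).
flagged_sd = violates_1_3s(118, 100, 5)       # outside 100 +/- 15
flagged_cv = violates_1_3s_cv(118, 100, 5.0)  # identical verdict
```

[The equivalence holds only where the mean is well away from zero, which is exactly where Roger's objection bites.]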
Back to the Top
Of course, Ed. But accuracy, like precision, is not a percent. And, of course, the Westgard plots can
also be done in terms of CV%. But I have not heard anything from you or anyone else that refutes the
fact that 1/var is the correct way to describe precision (or some unit of accuracy that is not a
percent). Come on: you are not responding to the statement about 1/var. Fish or cut bait. I have no
opinion on the matter. I just look at the evidence and go where it takes me. There is no scientific
reason to LEAVE SD, var, and 1/var and switch to a percent CV. None. Now, say something. Say
something about accuracy that is not a percent. I am listening. Say something about accuracy in
terms of the units measured, for example.
I am listening, Ed. I have not heard a thing yet from any of these supposed authorities who think
they are qualified to RELEASE data. I cannot tell you how disappointing this is. From their
behavior, I doubt the ability of the lab community to understand anything except percentages. So
why do they think they have to SET any limits?
Tell me HOW YOU propose to consider accuracy, please.
Once again, what all this GAINS for you is the ability to describe assay errors without any LLOQ!
Did you forget that? You don't have to do that any more! Don't you SEE what you can gain by that?
All the best, with hope that springs eternal,
Back to the Top
I do not know. What I do think I know is that one can make replicate measurements of any value
down to and including zero and determine the mean and SD of the results. CV% blows up as the
measurement approaches zero, but NOT the SD, var, or 1/var. That is why CV% is such a poor measure
of precision for anyone with more than a 6th grade education. Look at any statistics book, as I have
suggested MANY times. 1/var is the correct way to describe precision of measurements having Gaussian
noise, and this is correct all the way down to and including zero. Just look. Just look. You can
then decide how many SDs above the blank mean you wish to consider as recognizing the presence of
something in the sample.
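[Editor's note: the last sentence above can be made concrete with a minimal sketch, not from the thread; the blank readings are hypothetical. A limit of detection is simply the blank mean plus a negotiable multiple k of the blank SD.]

```python
import statistics

# Sketch of a "negotiable" limit of detection (illustrative; the blank
# readings are hypothetical): LOD = mean(blank) + k * SD(blank), where the
# multiplier k is a choice, not a law -- k = 3 is common, k = 5 stricter.

def lod(blank_replicates, k=3):
    """Detection limit as k SDs above the blank mean."""
    return (statistics.mean(blank_replicates)
            + k * statistics.stdev(blank_replicates))

blanks = [0.8, 1.0, 1.2, 0.9, 1.1]  # hypothetical blank responses
lod_3sd = lod(blanks, k=3)  # more permissive
lod_5sd = lod(blanks, k=5)  # more conservative
```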
Back to the Top
So we measure a nominal at 50 ng and the returned or back-calculated value is 75 ng, which is +25 ng
above the nominal. You would be happier with this report rather than saying the error is +50%? We
measure another curve point at 300 ng and we find the value is 450 ng, or +150 ng above nominal. It
would be more appropriate in your eyes to indicate that the error is 25 ng at 50 ng and 150 ng at
300 ng, rather than just indicating that the method has a +50% bias? For a point measured below an
LLOQ for an assay we measure 2, 3 and 4 ng. The mean is 3, the SD is 1 ng, and the variance is also
1. Somehow the SD of 1 and the 1/1 (1/variance) conveys more information than the CV% of (1/3) x
100, or 33.3%? How does the CV corrupt the understanding and interpretation of precision?
Back to the Top
[A few replies - db]
My opinion (as a believer):
Results below the LLQ that are still above the LOD are informative results and should be reported,
although with a disclaimer that bias and imprecision are unknown. Results below the LLQ AND below
the LOD are also informative, but harder to interpret from a pharmacometric point of view and should
be reported as
Rob ter Heine
Hospital Pharmacist-Clinical Pharmacologist
Meander Medical Center, Amersfoort, The Netherlands
Just a few comments from a theoretical point of view, to both Ed and
Roger. I would be happy to have comments on these. In short: why are results
not given with their 95% (or other confidence level) confidence
interval? That would convey all the information about the precision and
accuracy of the results (for the accuracy, as far as the method is not
too biased, that is, as far as one is not completely out of the
1) As a statistician, I don't like CVs, like all other kinds of
percentages, for anything other than pure description of results,
because they always lose the "sample size" or equivalent
information, which is important. I mean, one can always compute CVs
using the mean and SD, but if I have only CVs, I cannot go the other
way; I need something else. As with all percentages, it is also not
always clear what was used as the denominator (theoretical or
experimental mean, and so on).
But I think this is not the worst situation in assays, since I guess
the mean used in the CV given is always the determined concentration.
2) Conversely, I have a few theoretical problems with the "just SD /
Gaussian" approach of Roger, for two reasons:
- SD gives hints about precision ("random error", "noise"...), but no
information about accuracy ("bias", "systematic error"...);
- if you assume Gaussian noise on lab measures, it means the Y
value (optical density, peak area or whatever is measured) is
Gaussian conditionally on the X value (concentration). However,
what we are interested in is the X value, obtained from the
measured Y and the calibration curve; let's say a straight line
for the sake of simplicity, Y = a*X + b. Hence, X = (Y - b)/a. But
not only Y is random: a and b are too, and also Gaussian. Hence X
is not Gaussian, but a ratio of two correlated Gaussians. It does
not have a finite variance, and not even a finite mean (as can be
seen in the simplest case, where the ratio follows a Cauchy
distribution), so speaking of the mean and SD of a concentration is
at best an approximation, quite often acceptable, but not always.
This can be seen when computing confidence intervals on the X values,
for instance graphically using the prediction region of the
calibration curve: the determined X value is not at the center of
the confidence interval, and the effect (asymmetry) is higher when
one is far from the middle of the calibration curve and when the
uncertainty of the calibration curve is higher (either because the
experiment is noisy, or because it is not so accurate, since both
will increase the residual variance). Hence, the SD will be difficult
to interpret (not even speaking of the CV...), as always when the
distribution is not symmetrical.
So, why not use "confidence" intervals, as is advised in many other
areas where "effect sizes" or "parameter values" are needed? They
would give all the information about the result's precision and, to
some extent, its accuracy, and give natural warnings about low
concentrations: if the lower bound is 0, there is no certainty; the
upper bound gives an upper limit for the value of that given sample,
but can be different for another sample, and hence is more informative
than a fixed LLOQ.
The only remaining problems are:
- what level of confidence to use (95%, 99%...);
- how far the Gaussian approximation used to build the prediction
region can be trusted for this confidence interval.
And if that is too complex (?), why not give the value, the SD, the
accuracy and the CV, so that anyone can find what he prefers?
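[Editor's note: the ratio-of-Gaussians point can be checked numerically. The sketch below is illustrative only; the true slope, intercept, noise levels, and concentration are made-up, and for simplicity a and b are drawn independently here, whereas in a real fit they are correlated. The qualitative conclusion is the same: the back-calculated X = (Y - b)/a is skewed, so a symmetric mean +/- SD summary is only an approximation.]

```python
import random
import statistics

# Illustrative Monte Carlo, not from the original posts: with Gaussian
# noise on the response Y and on the fitted slope a and intercept b, the
# back-calculated concentration X = (Y - b) / a is NOT Gaussian -- it is a
# ratio involving Gaussians, and its distribution is right-skewed.
random.seed(42)

TRUE_A, TRUE_B, TRUE_X = 2.0, 0.5, 10.0  # hypothetical calibration line

def back_calculated_x(n=20000, sd_y=0.5, sd_a=0.2, sd_b=0.1):
    xs = []
    for _ in range(n):
        y = random.gauss(TRUE_A * TRUE_X + TRUE_B, sd_y)  # noisy response
        a = random.gauss(TRUE_A, sd_a)                    # uncertain slope
        b = random.gauss(TRUE_B, sd_b)                    # uncertain intercept
        xs.append((y - b) / a)
    return xs

xs = back_calculated_x()
mean_x = statistics.mean(xs)
median_x = statistics.median(xs)
# Right skew: the mean is pulled above the median.
```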
Just LOOK at the reference. Everything you say or imply can be twisted that way, but why create
such a perversion of the truth? Please JUST LOOK at the reference. Why twist it into something less
useful than it is? Why do you WISH to have a BLQ at all? Isn't life better (more useful) without it?
Come on, Ed. Open your eyes! Do you really prefer % to the correct measure? And WHY? There IS NO
LLOQ. Can't you SEE that? How does CV% corrupt the understanding and interpretation of precision?
Because it is incorrect, and leads you into the paths of useless SETTING of a poor measure of
precision that limits the usefulness of what you do. That is why. Use of CV clearly takes you to the
incorrect measure of precision. There is no LLOQ. Why in the world do you wish to INVENT something
(an LLOQ) that is NOT needed, NOT useful, and which LIMITS your ability to correctly describe
precision? As the measurement approaches zero the CV% gets larger and finally approaches infinity,
but the SD always remains finite, and so do var and 1/var. That is the answer. That is HOW the CV
corrupts the understanding and interpretation of precision. That is it. Just LOOK. PLEASE, your
holiness, just look in my telescope! Just look!
Yes, Ed. For a nominal measurement at 50 ng and a returned or back-calculated value of 75 ng, yes,
I would be happier with this report rather than saying that the error is 50%. Yes, BECAUSE, as the
measurement approaches zero, we have a finite measure of precision rather than something that BLOWS
UP and becomes infinite. That is why there is no LLOQ. Why do you want to use such an imperfect
measure of precision? You are exactly right. What is the CORRECT percent error of a blank
measurement that has an SD of 1, a var of 1, and a 1/var of 1? That is exactly HOW the CV corrupts
the understanding and interpretation of precision: because it is USELESS as the measurement
approaches zero. Why persist in this view when 1/var is so much more useful, and is correct for a
measurement having a Gaussian distribution of its errors?
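[Editor's note: the finite-SD argument is easy to demonstrate numerically. In this sketch (hypothetical replicate values, not from the thread), both sets have the same absolute noise, so the SD, variance, and 1/variance are identical; only the CV% differs, by a factor of 100, purely because the mean moved toward zero.]

```python
import statistics

# Illustrative replicate sets (hypothetical values): both have the same
# absolute noise (SD = 1), so SD, variance, and 1/variance are identical;
# only the CV% changes, because the mean moved toward zero.

def summarize(replicates):
    """Return (mean, SD, variance, 1/variance, CV%) for a replicate set."""
    m = statistics.mean(replicates)
    sd = statistics.stdev(replicates)
    cv = 100 * sd / m if m != 0 else float("inf")
    return m, sd, sd ** 2, 1 / sd ** 2, cv

high = summarize([99, 100, 101])  # mean 100: CV% is 1%
low = summarize([0, 1, 2])        # mean 1: same SD, CV% is 100%
```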
I would like to answer Frank about his questions.
It is hard (if possible at all) to determine a workable value for the noise power spectrum (NPS).
This should help explain why.
On Roger and Edward's topic:
It is indeed true that the nearer to zero, the more sensitive to small variation the CV gets.
But I cannot think of an example of a calibration curve that includes a 0 level, and near-zero
points can easily be treated to avoid the problem you mentioned.
I would be happy to see the SD in a report, because it allows me to build confidence intervals.
Dear Ed and Roger,
This is my two cents' contribution to your interesting and vivid discussion. For clarity: I'm on
Roger's side. From a pharmacometric point of view the question is simple: we need the results of ALL
measurements for data analysis, since ALL measurements contain information about the behavior of the
drug. So, nothing should be omitted, for whatever reason. But ALL measurements (i.e. the best
estimates of the concentration) should be accompanied by the best estimates of their credibility,
most practically the standard error. Leaving out measurements or standard deviations implies loss of
information that is needed for optimal data analysis. This does not only apply to BLQ values, but to
ALL measurements. A value without a standard error or confidence interval is meaningless, in
pharmacometrics as well as in the rest of the world. Not '<20%', but the best estimate, over the
whole range of measured values. If you persist in expressing the standard errors in %CV, I don't
object, as long as you provide the complete information.
Johannes H. Proost
Dept. of Pharmacokinetics, Toxicology and Targeting
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
Thank you SO MUCH for participating in this discussion. Are you aware that you are the only one?
Good for you!
All the best, always,
Back to the Top
Thanks, David, for putting these all together. And thanks to all for joining in the discussion.
I have no opinion. I just try to look at the evidence and act on that.
First, to you, Rob, I ask you again to look at the evidence and to reconsider what in your view
is reportable or not. The job of the lab is not to release or withhold, but to make its
information public. Others are probably as smart as you, and just as able to draw conclusions from
the data. What makes you think you are the one who must decide what to RELEASE? There is no LLQ.
Look at the evidence presented by SD, var, and 1/var, all of which are finite all the way down to
and including zero. Any LOD is a negotiable number of SDs above the SD of the blank. We all speak a
common language and we all can make intelligent judgments about the values reported under any
circumstances. What do you mean that bias and imprecision are unknown? You have a value and it has
an SD (or 95% confidence limit). Bias, I think you mean, is a deviation from a regression line. This
is not well expressed as a %, any more than is imprecision, but in the correct units of the
measurement. Also, do you think the pharmacometricians are stupid and should have a value less
than what in YOUR opinion is detectable deliberately withheld from them? That corrupts the data.
What is your expertise compared to theirs? No less, but no more. Again, there is no LOQ, and any LOD
is a totally negotiable number of SDs above the blank. To withhold such information is to corrupt
life as all of us see it. If you think that, I would suggest that you might review why we do
Now to you, Emmanuel, and the meat of the discussion. Good for you for bringing up many new
things. I will study up on the Cauchy distribution. Bias: I agree. This should be described not in
percent, but in the units of the assay used, usually in relation to a regression line, or
polynomial, reflecting the assumed relationship between X and Y. What I suggest is to first get the
assay into acceptable shape, with replicate standard samples, including blanks, to determine the
relationship between concentration and assay SD. You also need enough replicates in each sample to
obtain a realistic estimate of the assay SD. You might look at
1. Ahn S and Fessler JA: Standard Errors of Mean, Variance, and Standard Deviation Estimators.
Technical report, The University of Michigan, July 24, 2003.
2. Seber G.A.F. and Wild C.J.: Nonlinear Regression, Wiley, New York, 1989, pp. 536 – 537
The relationship between the number of replicates and the error in the estimate of the sample SD is
described below:

ERR(σ) ≈ 1/√(2(n − 1))

or, after rearranging,

n ≈ 1 + 1/(2 · ERR²)

where ERR is the standard error of the estimate of the SD (a number from 0 to 1.0), σ is the true
value of the SD, n is the number of replicate samples, and ≈ means "approximately equal to". The
more replicates measured, the more precise is the estimate of the SD. For n = 3, ERR is 0.5, or 50%.
For n = 5, it is 0.354, or 35.4%. For n = 9, it is 0.25, or 25%. So, clearly, the more replicates,
the better.
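[Editor's note: the approximation quoted above, together with its rearrangement to solve for n (the rearrangement is the editor's algebra, not from the post), can be written as a short sketch.]

```python
import math

# ERR ~ 1 / sqrt(2 * (n - 1)): approximate relative standard error of a
# sample SD estimated from n replicates (the formula quoted in the post).

def err_of_sd(n):
    """Approximate relative error of the estimated SD for n replicates."""
    return 1.0 / math.sqrt(2 * (n - 1))

def replicates_needed(err):
    """The same expression solved for n: replicates for a target error."""
    return 1 + 1.0 / (2 * err ** 2)

# Reproduces the figures in the post: n=3 -> 50%, n=5 -> ~35%, n=9 -> 25%.
```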
Having validated your assay, I would then suggest putting real samples through it, and using a
polynomial relationship to obtain a reasonable estimate of the assay SD for each single sample that
goes through your assay system. From this you can get whatever you want: the useless CV%, or the
useful SD, var, and 1/var. Then all you have to do is report what you get: the value itself, and the
associated SD, CV%, var, 95% confidence limit, or whatever. You can also describe the mean error of
the assay in relation to the equation used for its calibration, in the correct units, not percent.
This is my ideal. I would suggest that what is found in the sample assayed, NOT the regression
relationship from the standards (which you can get without doing the assay at all), is the best
reflection of what is in the sample.
And YES! Let's report out (RELEASE) ALL the real stuff! The measured value, and the SD, var,
1/var, and even CV%, so EVERYONE can have what they want! I am surely with you on that!
Best to all, and many thanks for a great discussion!
Back to the Top
Roger et al.: 1. Bioanalytical reports contain the mean, SD and counts in addition to %CV and %bias.
It's there if you have ever examined a BA validation or report. That there is so much discussion
about including those data suggests that perhaps the reports are not reviewed thoroughly by end
users. Both CV and bias were inventions of statisticians to compare parameters across bounds: in our
case, across assays, across platforms, etc.
It must be remembered that we are not the end users of the data. PKPD folks can request us to
report and provide any measure of accuracy and precision they desire. Although they can request
otherwise, most PKPD request or are content with CV and bias. And most do not challenge reporting
Now, if PKPD wanted instrument response data released to them, we could do that provided they get
clearance from FDA, EMA, MHRA, etc., that would hold the bioanalytical lab free from the threat of
483s etc. for releasing such data.
Again, most PKPD accept the concept of the LLOQ and live with it and do not request anything more.
The LLOQ is usually designed to provide at least 3 to 4 points which characterize the terminal phase
and permit estimation of half-life with some degree of certainty.
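[Editor's note: the closing point about 3 to 4 terminal points can be illustrated with a minimal sketch using hypothetical, noise-free data (real estimates would carry uncertainty): the terminal half-life comes from a log-linear least-squares fit of the last concentration-time points, so enough points above the LLOQ are needed to define the slope.]

```python
import math

# Illustrative sketch (made-up, noise-free data): the terminal half-life is
# ln(2) divided by the terminal rate constant, estimated from an ordinary
# least-squares fit of ln(concentration) versus time over the last points.

def terminal_half_life(times, concs):
    """Half-life from the least-squares slope of ln(conc) vs. time."""
    n = len(times)
    ln_c = [math.log(c) for c in concs]
    t_mean = sum(times) / n
    c_mean = sum(ln_c) / n
    slope = (sum((t - t_mean) * (lc - c_mean) for t, lc in zip(times, ln_c))
             / sum((t - t_mean) ** 2 for t in times))
    return math.log(2) / -slope

# Four terminal points from a hypothetical drug with a 4 h half-life.
t = [8, 12, 16, 24]
c = [100 * 0.5 ** (ti / 4) for ti in t]
t_half = terminal_half_life(t, c)
```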
If this link does not work with your browser send a follow-up message to PharmPK@lists.ucdenver.edu with "Releasing BLQ datapoints" as the subject
Copyright 1995-2014 David W. A. Bourne (email@example.com)