Back to the Top
Dear all
I have a problem with HPLC method validation. My question is: should
the lowest concentration in the linearity range be the limit of
quantification (LOQ)? Any reply would be greatly appreciated.
Thank you
Parvin
[Traffic on PharmPK has been rather light this week. I hope it isn't
something I've said ;-) If you have sent a message to the list since
the weekend and it hasn't appeared please send it directly to me at
david.-a-.boomer.org or david-bourne.aaa.ouhsc.edu and I'll forward it
directly. Thank you - db]
Back to the Top
The following message was posted to: PharmPK
The lowest standard on the calibration curve should be accepted as the
limit of quantification if the following conditions are met: 1) the
analyte response at the LLOQ should be at least 5 times the blank
response; 2) the analyte peak should be identifiable, discrete, and
reproducible with a precision of 20% and an accuracy of 80-120%
Francoise
francoise.bree.-at-.biopredic.com
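[The acceptance criteria above can be sketched as a quick check. The
replicate values and the function name below are invented for
illustration, not taken from any validation report. - ed]

```python
# Hypothetical LLOQ acceptance check based on the criteria above:
# mean response >= 5x blank response, precision (CV) <= 20%,
# accuracy of each replicate within 80-120% of nominal.
from statistics import mean, stdev

def lloq_acceptable(replicate_measured, nominal, replicate_responses, blank_response):
    accuracy = [100.0 * m / nominal for m in replicate_measured]
    cv = 100.0 * stdev(replicate_measured) / mean(replicate_measured)
    signal_ok = mean(replicate_responses) >= 5.0 * blank_response
    accuracy_ok = all(80.0 <= a <= 120.0 for a in accuracy)
    precision_ok = cv <= 20.0
    return signal_ok and accuracy_ok and precision_ok

# Example: five replicates at a nominal 10 ng/ml standard (invented numbers)
measured = [9.1, 10.4, 11.2, 9.8, 10.9]   # back-calculated concentrations
responses = [520, 560, 610, 540, 590]     # peak areas
print(lloq_acceptable(measured, 10.0, responses, blank_response=95))  # True
```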
Back to the Top
The following message was posted to: PharmPK
Dear Ms Parvin Zakeri,
LOD and LOQ refer to the limit of detection and the limit of
quantification, as follows:
Limit of Detection (LOD) - Lowest level of analyte that can be
detected, but not necessarily quantified.
Limit of Quantitation (LOQ) - Lowest level of analyte that can be
reliably measured with acceptable accuracy and precision.
I hope this helps
With the best wishes
Dr. Abolfazl Mostafavi
Faculty of Pharmacy and Pharmaceutical Sciences
Isfahan University of Medical Sciences
Isfahan, I.R. Iran
Back to the Top
The following message was posted to: PharmPK
I would just like to add that the LOD is the lowest level detected with
a S/N (signal-to-noise) ratio of 3 or more. Please correct me if that
is not so.
Thanks,
Chandra Chaurasia
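[The S/N conventions in this thread (3 for LOD; values such as 7 or 10
are commonly used for LOQ) amount to a one-line calculation. The noise
and slope values below are assumed, purely for illustration. - ed]

```python
# Illustrative S/N-based detection limits: the concentration whose
# expected signal is sn_ratio times the baseline noise SD.
def sn_limit(noise_sd, slope, sn_ratio):
    """Concentration giving a signal of sn_ratio * noise_sd."""
    return sn_ratio * noise_sd / slope

noise_sd = 2.0   # baseline noise SD, in response units (assumed)
slope = 6.0      # response per ng/ml from the calibration line (assumed)
lod = sn_limit(noise_sd, slope, 3)    # S/N = 3 convention for LOD
loq = sn_limit(noise_sd, slope, 10)   # S/N = 10, one common LOQ choice
print(round(lod, 2), round(loq, 2))   # 1.0 3.33
```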
Back to the Top
The following message was posted to: PharmPK
I would add to the other replies that you could have additional points
below your LOQ. This could be useful to improve your curve fitting,
which may improve other assay characteristics such as accuracy,
precision, LOD, LOQ, etc.
On the other hand, you don't want to have too many points.
However, all of this should really be part of your assay development.
Roy
Back to the Top
The following message was posted to: PharmPK
Regarding the possibility of calibration standards lower than the LLOQ
in a previous message:
"I would add to the other replies that you could have additional points
below your LOQ. This could be useful to improve your curve fitting,
which may improve other assay characteristics such as accuracy,
precision, LOD, LOQ, etc."
The FDA guidelines on Bioanalytical Methods
(http://www.fda.gov/cder/guidance/4252fnl.htm) strongly discourage this
for chromatographic assays:
"The LLOQ should serve as the lowest concentration on the
standard curve and should not be confused with the limit of detection
and/or the low QC sample."
Thomas L. Tarnowski, Ph. D.
Project Team Leader
Dept. Head, Drug Metabolism and Pharmacokinetics
Roche Palo Alto
3431 Hillview Avenue, Palo Alto, CA 94304
* Phone: (650) 852-3182
* FAX: (650) 852-6428
* email tom.tarnowski.aaa.roche.com
Back to the Top
The following message was posted to: PharmPK
Let me start by saying that I am not a bio-analytical scientist. Having
said that, I think the general agreement is that one should not
extrapolate outside the limits of the standard curve (SC). You won't
(generally) report a sample's concentration as 250 ng/ml if your SC
covered concentrations between 10-100 ng/ml. The same is of course
valid for the other end: using the same SC, one wouldn't accept a
sample's concentration being reported as 3 ng/ml (and believe me, I
have seen examples of these!). So the question is, how can we have
anything lower than the LOQ as the lower limit of our SC? In the
example above, 10 ng/ml is what we can quantify with some certainty (at
a minimum, over 80%). I am not sure how a 5 ng/ml or 2 ng/ml sample
will help in getting a better standard curve, since samples with
concentrations below 10 ng/ml are deemed not to be reliably
quantifiable.
On the other hand, it would be strange not to use the lowest reliably
quantifiable concentration as the lowest point of the SC. In the
example SC above, why would one use 25 ng/ml as the lowest point,
risking missing a reliable quantification of samples below 25 ng/ml
that might be above 10 ng/ml? There are of course exceptions, e.g. if
it is already known that the samples will fall in a certain range, in
which case there may be no need to use the LOQ as the lowest point of
the SC.
Toufigh Gordi
Back to the Top
Dear Parvin:
Why do you have a lower limit of quantification at all? It is
true, for toxicology, where the specimen itself is the only source of
information, you need it. However, for PK work, where you also know the
time of the sample and the time of the last dose, for example, you know
the drug is there, since it usually decreases with a half-time. Because
of this, you are not asking if the drug is present or not, as in
toxicology. You know, from the other information, that the drug is
there, and you are asking HOW MUCH is there? You might look at pp
136-139 in Handbook of Analytical Separations, ed by Georg Hempel, Vol
5, Drug Monitoring and Clinical Chemistry, Elsevier, 2004, ISBN
0-444-50972-0, for more on this.
Very best regards,
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
Phone (323)442-1300, fax (323)442-1302, email= jelliffe.-at-.usc.edu
Our web site= http://www.lapk.org
Back to the Top
The following message was posted to: PharmPK
During the development and validation of a bioanalytical method, due
attention should be given to the limit of quantification. As mentioned
by Dr Mostafavi, the LOQ is defined as:
Limit of Quantitation (LOQ) - Lowest level of analyte that can be
reliably measured with acceptable accuracy and precision.
During development, the analytical researcher should try to reach the
lowest possible Lower Limit of Quantification, because this will
determine the acceptable low values. This LLOQ should also be included
in the calibration curve and should be considered the lowest acceptable
value.
Any value lower than the LLOQ should be considered unreliable and is
usually reported as zero. If the calibration range is wide, it is also
advisable to add calibration point(s) near the LLOQ value for a more
reliable response function.
Reference:
The FDA guidelines on Bioanalytical Methods
(http://www.fda.gov/cder/guidance/4252fnl.htm)
Best regards,
Dr. Charina De Silva
Back to the Top
The following message was posted to: PharmPK
Dear Roger:
you said that "Why do you have a lower limit of quantification at all?
It is true, for toxicology, where the specimen itself is the only
source of information, you need it. However, for PK work, where you
also know the time of the sample and the time of the last dose, for
example, you know the drug is there..."
On the contrary, in my opinion, it is often the case in toxicology that
you just want to know whether the analyte is present in the organism,
indicating a toxicological effect; for example, if you test for the
presence of LSD in urine, you just want to prove whether or not your
patient has taken the drug. In PK studies, the lowest concentrations
are the last points, and their impact on the linearisation, and thus
often on the half-life determination, is important. If you have a lack
of precision in these points, your model will not be very accurate. For
this reason, I think that it is important to consider the LOQ for PK
studies, and not always for toxicological studies.
One often considers a S/N ratio > 7 for LOQ determination WITH a
relative error below 20%, but these limits have to be discussed and
fixed for every type of study, analyte and material.
I hope this helps
Frederic Lagarce
--
Frederic Lagarce, Pharm-D, Ph-D
Inserm U 646, Ingenierie de la vectorisation particulaire
10 rue A Boquel, 49100 Angers
tel 33 (2) 41 73 58 55
fax 33 (2) 41 73 58 53
Back to the Top
The following message was posted to: PharmPK
Regarding using points below LOQ in a standard curve (SC).
I'm probably guilty of overemphasizing the point I was trying to bring
up (or at least of not being clear). Also, this is probably overkill
for most applications.
I'm not disagreeing with any previous posts on this topic (except to
say that you COULD have "extra" points below the LOQ).
In particular, I would emphasize the use of caution in going beyond any
official "requirement" (e.g. US code, USP, FDA guidance, etc.); I mean
as a general QA policy.
Also, I'm definitely not suggesting that you interpolate outside of the
range of interpolation; that would be extrapolating, and you would need
additional assumptions for that.
Having said this, let me try to clarify a bit what I was trying to
suggest. (I'll use the example from Toufigh Gordi's post, as quoted
below.)
It relates to curve fitting (CF) (response vs. concentration for our
applications), and would mainly apply to the developmental phase of an
assay (as opposed to the validation phase itself).
A) Improving curve-fitting (CF) correlation
Let's assume some correlation for a linear CF in the range 10-100
ng/ml. Running additional points like 5 ng/ml, 2 ng/ml, etc., and
trying different CF models (e.g. 4-parameter logistic, log/log, etc.),
we MIGHT be able to "improve" the CF correlation, which COULD bring a
"lower" SC point (e.g. 5 ng/ml) into the accepted accuracy range (80%
to 120%). Now, in this case we would be changing the LOQ (i.e. from 10
ng/ml to 5 ng/ml), so we could argue that we're still using the LOQ as
the lowest SC point.
B) Curve-fitting (CF) "weighting"
A "region" of a SC can be "weighted" by adding extra points to that
"region"; this is inherent to most (if not all) CF algorithms. It is
particularly the case for lower correlations (not too close to 1.000).
There are also other weighting methods, like error-analysis weighting.
Also, "weighting" can:
B1) "improve" the CF correlation (see A)
B2) "improve" the "CF correlation for the region" of the SC being
weighted, if you will. For example, weighting the "region" of the SC
around 10 ng/ml COULD "improve" the accuracy for that "region".
In any case, any CF and any "weighting" should be validated (e.g. by
running controls, etc.)
Roy C.
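[Roy's weighting point can be illustrated with a minimal weighted
least-squares sketch. The calibration data are invented, and the 1/x^2
weighting shown is only one common bioanalytical choice, not the
thread's recommendation. - ed]

```python
# Weighted linear curve fitting (response vs. concentration): the weights
# decide how much each calibration region influences the fitted line.
import numpy as np

conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # ng/ml (invented)
resp = np.array([13.0, 31.0, 58.0, 148.0, 305.0, 590.0])

def weighted_linear_fit(x, y, w):
    # Solve the weighted normal equations for (slope, intercept)
    W = np.diag(w)
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

unweighted = weighted_linear_fit(conc, resp, np.ones_like(conc))
weighted = weighted_linear_fit(conc, resp, 1.0 / conc**2)  # favors low end
print(unweighted, weighted)
```

With 1/x^2 weighting the low-concentration points dominate the fit, so
back-calculated accuracy near the LLOQ typically improves at the cost of
a slightly worse fit at the top of the range.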
Back to the Top
The following message was posted to: PharmPK
Dear Frederic
I always shiver when I see so much concern put on the reliability of
the very last C(t) observation, with the intention of estimating the
smallest elimination rate constant, or its corresponding half-life, for
a particular PK model. In fact, if for this purpose a simple linear
regression of the log-linear terminal phase finds enough leverage (in
terms of the Hat matrix) for the latest point to be able to influence
the slope, then it most certainly shouldn't be used, since it doesn't
pertain to the same relationship. Many reasons can be advanced for
this, but it doesn't make sense to minimize the information conveyed by
earlier time points (in regression terms) just to accommodate one more
because a certain LOQ rule was defined. If by any chance it doesn't
influence the slope estimate, then it doesn't matter whether it
complies with the rule or not, as Roger pointed out. In kinetics,
although one may be uncertain about the precision of this last point
(the smaller the variance, the greater the precision), one thing can be
assured for certain: concentrations will decline asymptotically towards
zero over time, as long as the 1st-order assumption holds and the laws
of physics and diffusion are not challenged. So a concerned
pharmacokineticist should put more emphasis on the (later-times)
sampling scheme than on the LOQ. Why do you suggest S/N > 7 and
relative error < 20%? Why not 6 and 25, or 8 and 10? I guess a crucial
statement in your comment is the phrase "your model will not be very
accurate". Although I think you were referring to model precision,
since accuracy is defined as "exact conformity to truth" or "freedom
from error", and that no model can claim, I refer you to the work of
George Box and his statement that "models are to be used, not believed
in".
Best Regards
Luis
Luis M. Pereira, Ph.D.
Assistant Professor, Biopharmaceutics and Pharmacokinetics
Massachusetts College of Pharmacy and Health Sciences
179 Longwood Ave, Boston, MA 02115
Phone: (617) 732-2905
Fax: (617) 732-2228
Luis.Pereira.at.bos.mcphs.edu
Back to the Top
The following message was posted to: PharmPK
Hi Roy,
You wrote:
"
A) Improving curve-fitting (CF) correlation
Let's assume some correlation for a linear CF in the range 10-100 ng/ml.
Running additional points like 5 ng/ml, 2 ng/ml, etc. and trying
different CF models (e.g. 4 parameter logistic, log/log, etc), we MIGHT
be able to "improve" the CF correlation which COULD bring a "lower" SC
point (e.g. 5 ng/ml) into the accepted accuracy range (80 % to 120 %).
Now, in this case we would be changing the LOQ (i.e. from 10 ng/ml to 5
ng/ml) so we could argue that we're still using the LOQ as the lowest SC
point.
"
I found this logical and acceptable. However, if one thinks that the
LOQ can be pushed down by taking this approach, it must be validated
before the actual sample analysis (as you also mentioned in your mail).
If we have shown that, in the example SC, 5 ng/ml can indeed be
quantified with a desirable degree of confidence, then it can be added
to the SC. But, as you mention above, the question still remains: what
is the use of adding 2 or 1 ng/ml to the SC (now 5-100 ng/ml)?
Obviously, not much improvement can be made, since what could be done
has already taken place using the new LOQ, i.e. the 5 ng/ml
concentration sample. So why include samples below that?
I am afraid I still do not understand how adding samples below the LOQ
will add to the quality of the analysis.
Toufigh Gordi
Back to the Top
The following message was posted to: PharmPK
Luis,
Luis Pereira wrote:
>
> Dear Frederic
> I always shiver when I see so much concern put on the
> reliability of the very last C(t) observation, with the intention of
> estimating the smallest elimination rate constant, or its
> corresponding half-life, for a particular PK model.
You could stop shivering if you use a method that puts appropriate
weight on each and every point in the regression. Some interesting
approaches to this when there is missing data e.g. caused by
application of the LOQ to discard an observed value, can be found here:
Beal S. Ways to fit a pharmacokinetic model with some data below the
quantification limit. Journal of Pharmacokinetics and Pharmacodynamics
2001;28(5):481-504.
> Although I think you were referring to model
> precision, since accuracy is defined as "exact conformity to truth" or
> "freedom from error" and that no model can claim, I refer you to the
> work of George Box and his statement that "models are to be used, not
> believed in".
For the sake of *accuracy*, I think the actual quote is "All models are
wrong but some are useful". See: Box GEP. Robustness in the strategy of
scientific model building. In: Launer RL & Wilkinson GN. Robustness in
Statistics. New York: Academic Press, 1979: pp. 202, as cited by
Duffull in http://www.boomer.org/pkin/PK01/PK2001250.html
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.aaa.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
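[For readers following up the Beal (2001) reference, one of the ideas
it surveys, treating below-LOQ observations as left-censored in the
likelihood rather than discarding them (often called the "M3" method),
can be sketched as below. All data values are invented, and a
mono-exponential model with additive error is assumed. - ed]

```python
# Censored-likelihood fit: quantified points contribute a normal density,
# BQL points contribute P(observation < LOQ) via the normal CDF.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

t = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0])       # sampling times (h)
c_obs = np.array([8.1, 6.6, 4.4, 1.9, 0.9, np.nan])  # nan marks a BQL sample
loq = 0.5
bql = np.isnan(c_obs)

def neg_log_lik(log_params):
    c0, k, sd = np.exp(log_params)                   # log scale keeps them positive
    pred = c0 * np.exp(-k * t)
    ll = norm.logpdf(c_obs[~bql], pred[~bql], sd).sum()  # quantified points
    ll += norm.logcdf((loq - pred[bql]) / sd).sum()      # censored contribution
    return -ll

fit = minimize(neg_log_lik, x0=np.log([10.0, 0.2, 0.5]), method="Nelder-Mead")
c0, k, sd = np.exp(fit.x)
print(round(c0, 1), round(k, 3))
```

The BQL point still informs the terminal slope, because a high predicted
concentration at t = 16 h would make P(obs < LOQ) small.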
Back to the Top
Dear Frederic:
Of course, you are correct for toxicology. You may just want to know
whether something is present or not, and you usually have no
information except the result in the sample itself. You are also
correct concerning the modeling aspects, and you can certainly get a
long tail T 1/2 with many values very close to zero. Good experimental
design and a sensitive assay are always to be desired.
My basic point is that when you are doing therapeutic drug
monitoring, however, in the usual setting of sparse sampling, it is not
at all helpful to be given a result reported as "less than XX units". A
very valuable clinical data point is lost. Since you know, from the
time of the dose and the time of the sample, that the drug is present,
you can certainly report that value all the way down to and including a
blank. Look at the material I referred to and see what you think. For
example, if one can set up a blank determination where the machine can
give both positive and negative readings, then you can easily get the
SD of the blank. Now, suppose this is 0.5 ug/ml, for example, and
suppose the assay gets a result of 0.3 ug/ml. What do you do with this?
If you are talking toxicology, people often report this as "less than
1.5 ug/ml", the LOQ, if one takes that as 3 times the blank SD. You
actually do not know whether the drug is there or not.
The point here is that you can make both types of people happy,
the toxicology and the TDM people both, by reporting the result as
"0.3 ug/ml, less than our usual LOQ of 1.5 ug/ml". Then both types of
people are happy. You are happy because you like the idea of the LOQ. I
am happy because I have a data point to fit in the usual scenario of
Bayesian adaptive control with TDM. If I did not have that result, I
could not do a good job of fitting that data point and doing good
Bayesian model-based, target oriented TDM. And if I were a Medicare
administrator, I frankly would not pay a lab doing TDM simply to report
a result as "less than such and such". When you have the information
from the data of time of dose and time of sample, and when you are
doing TDM, it is really not useful simply to report a result as "less
than XXX", and I would not pay for that in a clinical setting of TDM.
We can do much better, and actually do not need the idea of an LOQ in
that clinical setting. Don't think of the signal getting lost in an
infinitely great CV% as the result goes to zero. The SD and the
variance are always finite, and one can then give correct weight to the
result, according to its Fisher Information, the reciprocal of the
assay variance, at any concentration. It is SO easy to do the job so
that both types of people are helped by the lab. Using the Fisher
information quantifies the approach to the data. The idea of a rigid
LOQ categorizes the approach and is very much overly restrictive.
Does this help?
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
Phone (323)442-1300, fax (323)442-1302, email= jelliffe.-at-.usc.edu
Our web site= http://www.lapk.org
Back to the Top
The following message was posted to: PharmPK
Dear Nick
I feel much warmer now, since you're making my point exactly. An
appropriate weighting scheme (regression based) should be the option
for tail-end heteroscedastic data, regardless of the LOQ as a criterion
to exclude data.
I thank you for the correction on the quote. Although we may certainly
agree that Professor Box, throughout his long career, pronounced many
more wise thoughts than those found in quote lists, the remark he made
as "Models are to be used, not believed." is originally by Henri Theil
(Professor of Econometrics, Eminent Scholar Emeritus, Univ. of Florida)
in "Principles of Econometrics", 1971, John Wiley and Sons.
I constantly find inspiration in either quote.
Best regards from a long time admirer,
Luis
Luis M. Pereira, Ph.D.
Assistant Professor, Biopharmaceutics and Pharmacokinetics
Massachusetts College of Pharmacy and Health Sciences
179 Longwood Ave, Boston, MA 02115
Phone: (617) 732-2905
Fax: (617) 732-2228
Luis.Pereira.aaa.bos.mcphs.edu
Back to the Top
The following message was posted to: PharmPK
Luis,
I'm glad you are feeling more comfortable and we obviously share the
same interests.
Thank you for the additional quote from Henri Theil. A useful
complement to the one from Box.
Best wishes,
Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New
Zealand
email:n.holford.at.auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
Back to the Top
Dear Luis and Nick:
Exactly. An appropriate weighting scheme is needed. You can assay about
5 samples in at least quadruplicate - a blank, a low, a middle, a
higher, and a very high sample - and get the mean and SD for each. The
more replicates beyond a quadruplicate you have, the more confidence
you will have in the SDs you estimate. Then, fit the relationship
between the means and the SDs with a polynomial of up to 3rd order.
Usually a 2nd order is enough to capture the gentle upward curve you
usually see in this relationship. Then, using the polynomial, you can
get a good estimate of the SD with which any single sample was probably
measured. In this way, you can give weight to each data point by its
Fisher information, the reciprocal of its variance.
The remaining error (the SD of the other sources of noise) from
the environmental errors in preparation and administration of the
doses, the recording of the times of the doses and the serum samples,
and the structural misspecification of the model, can then be estimated
as either an additive or a multiplicative term. In this way, you can
have a good idea of your assay error, and the fraction of the overall
noise the assay noise represents. If the environmental noise term is
small, sometimes less than 2, then you can have pretty good confidence
that the study was well done. On the other hand, if it is over 10, say,
then you have a good deal of noise in the clinical setting, over and
above the assay noise. All this is useful information. You can now give
weight to the data with good knowledge of the overall noise, and also
good knowledge of how much is the assay and how much is the
environment. Often, in a good study, the assay noise is a significant
part of the overall environmental noise.
Very best regards,
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
Phone (323)442-1300, fax (323)442-1302, email= jelliffe.aaa.usc.edu
Our web site= http://www.lapk.org
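[The assay-error polynomial described above can be sketched in a few
lines. The replicate means and SDs are invented for illustration; the
weight is the Fisher information, 1/variance. - ed]

```python
# Fit SD vs. mean concentration from replicated samples with a 2nd-order
# polynomial, then weight any single measurement by 1/variance.
import numpy as np

# Mean and SD from (at least) quadruplicate assays of a blank, low,
# middle, higher, and very high sample (ug/ml) - invented values.
means = np.array([0.0, 2.0, 10.0, 50.0, 100.0])
sds = np.array([0.5, 0.6, 1.0, 3.2, 6.5])

coef = np.polyfit(means, sds, deg=2)   # SD ~ c2*C**2 + c1*C + c0

def fisher_weight(conc):
    """Weight for a measurement at this concentration: reciprocal of variance."""
    sd = np.polyval(coef, conc)
    return 1.0 / sd ** 2

# A 0.3 ug/ml result is not discarded; it simply carries the weight
# implied by its estimated SD.
print(round(float(np.polyval(coef, 0.3)), 2), round(float(fisher_weight(0.3)), 2))
```

Low concentrations get small weights (large relative SD) but are never
thrown away, which is the point of weighting by Fisher information
rather than applying a hard LOQ cut-off.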
Back to the Top
The following message was posted to: PharmPK
Dear Roger,
Thank you for the clarification; now I think I understand your point of
view better. Of course it is very tempting to give a result such as 0.3
ug/ml instead of "less than 0.5", but is it correct? If the relative
error is large, this 0.3 could have been 0.1 or 0.2 or 0.4, or even
more. There is a chance that the person receiving the result will take
this 0.3 as a fully correct value, giving it too much importance. In
this case I prefer to answer "less than 0.5" or "not quantifiable";
that was the policy in the hospital where I used to do TDM.
Dear Luis,
My limits for the LOQ (S/N > 7, etc.) were of course drawn from what we
commonly used, but as I said, these limits have to be re-discussed on a
case-by-case basis.
"Accuracy" or "precision" of a model is not a good term, I do agree
with you; in fact, I was thinking more of relevancy.
Frederic
--
Frederic Lagarce, Pharm-D, Ph-D
Inserm U 646, Ingenierie de la vectorisation particulaire
10 rue A Boquel, 49100 Angers
tel 33 (2) 41 73 58 55
fax 33 (2) 41 73 58 53
Back to the Top
Dear Luis:
What we know is the measurement and its variance, which is what you can
use to give it weight according to its credibility. The result you have
is always the most correct value you have. Its error can easily be
known and quantified from the assay error polynomial, so you can give
it importance according to its Fisher information. Once again, the SD
or the variance is the thing, not the CV%. Read the material on our web
site under teaching topics; I am not sure you have read it yet. I am
sorry your hospital decided to withhold a lot of data it could have
used. If your lab had a policy of withholding such a data point by
reporting it as "less than XXX", I would totally disagree with such a
policy. It is extremely wasteful of time, effort, money, and most
important, data for the care of the patient. Again, if I were a
government administrator, I would not pay for such a result in TDM,
especially when it is so easy to report it so that both people get what
they need: as 0.3 ug/ml +/- 0.5 ug/ml, for example, which is below our
usual LOQ of 1.5 ug/ml (for example, 3 SD above the blank).
Very best regards,
Roger Jelliffe
Roger W. Jelliffe, M.D. Professor of Medicine,
Division of Geriatric Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
Phone (323)442-1300, fax (323)442-1302, email= jelliffe.-at-.usc.edu
Our web site= http://www.lapk.org
Back to the Top
I might have missed some points in the discussion. Extrapolated or
interpolated values used to draw out the scientific/academic merit of a
study may be acceptable for salvaging some data. However, if you want
to submit this as part of a regulatory approval or to make specific
claims, the Crystal City guidance is more or less written in stone, and
we had better follow a validated analytical method and the steps
outlined in the FDA guidance; otherwise, be prepared to recreate your
entire study according to the GLP guidance.
Regards,
Prasad Tata
Mallinckrodt, Inc.
Back to the Top
The following message was posted to: PharmPK
Dear colleagues,
I agree with Roger Jelliffe's plea for reporting and using
concentration values below the LOQ as their best estimate and standard
deviation (actually, this should also be done for values above the
LOQ!). Of course this requires that the PK software takes this standard
deviation into account in an appropriate way, e.g. as described by
Roger.
Roger also wrote:
> It is extremely wasteful of time, effort, money, and most
> important, data for the care of the patient. Again, if I were a
> government administrator, I would not pay for such a result in TDM,
> especially when it is so easy to report it so that both people get
what
> they need, as 0.3 ug/ml, +/- 0.5 ug.ml, for example, which is below
our
> usual LOQ of 1.5 ug/ml, for example, 3 SD above the blank.
Again I agree, but IMHO this is rather exaggerated. Indeed, a reported
value of 0.3 +/- 0.5 ug/ml gives some information about the actual drug
level, but the information is quite vague, and in general it will
affect the parameters obtained by Maximum A Posteriori Bayesian
estimation only marginally.
Instead of a further discussion of this subject, I suggest that Roger
provide an example demonstrating the importance of, e.g., a reported
value of '0.3 +/- 0.5 ug/ml' compared to a reported value of 'below 0.5
ug/ml'. This might convince everybody more than words.
Best regards,
Hans Proost
Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.-at-.rug.nl
Copyright 1995-2010 David W. A. Bourne (david@boomer.org)