Looking for any suggestions / thoughts on a way to monitor AG plasma levels "bedside" (less than 48 hr turnaround from sample taken) that wouldn't require antibody development for a TDM. It seems that while in theory LC-MS/MS would work, in practice the turnaround is too long. The dynamic range we are looking for is ~0.5 to 250 ug/mL.
-Tarra
The following message was posted to: PharmPK
You can have a look on what has been done using immunochromatographic strips for
rapid semi-quantitative drug detection.
Henri
> Looking for any suggestions / thoughts on a way to monitor AG plasma
> levels "bedside" (less than 48 hr turnaround from sample taken) that
> wouldn't require antibody development for a TDM. It seems that while in
> theory LC-MS/MS would work, in practice the turnaround is too long. The
> dynamic range we are looking for is ~0.5 to 250 ug/mL.
I assume you have considered asking any clinical lab to do this for you.
This is a very widely used and I assume pretty cheap assay. I'd be
interested to hear why you wouldn't take this obvious path to get concs
measured.
Nick Holford, Professor Clinical Pharmacology
Dept Pharmacology & Clinical Pharmacology, Bldg 505 Room 202D University
of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford
-
The following message was posted to: PharmPK
Abbott has a 5-minute system. I assume you mean aminoglycoside by AG.
Tim Cacek
-
The following message was posted to: PharmPK
Dear Tarra:
A good question. But one issue is the LOQ. This is actually a cultural illusion among the laboratory and TDM community, but not among the well-trained PK/PD community. The idea that CV% is the measure of assay error is simply not correct. The lab community is not used to having their data fitted in a mathematically correct way, and they resist it. Why, I do not know. The correct way to describe assay error is by the precision with which any measured value is reported. This is the SD. But expressing the SD as a percent CV is not correct. We have the illusion that a constant % error is correct because of the way it looks visually to us. A measurement of 10 units with a CV of 10% has an SD of 1. A similar result of 20 has an SD of 2. The really correct measure is the variance, the SD squared. So the reciprocal of the variance with which an assay result is obtained is the correct weight to give the data when it is fitted.
What does this mean? It really means that there is, in fact, no LOQ. Of course, as the result approaches zero, the CV% gets greater, and usually above 10 or 15 or 20% they simply decide to censor data below that LOQ. This ignores the real issue of the precision with which low results are reported. Many assays (HIV PCR, HCV viral load, etc.) should ideally be treated not to an LOQ such as < 50 copies, but all the way down to zero, into the machine noise of the assay, where the CV% is infinite but the machine noise itself is finite and quantifiable.
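One practical way to carry this out is to fit the assay SD as a polynomial in concentration, so the weight 1/SD^2 stays finite all the way down to zero. A sketch (the linear error model and the replicate data below are assumptions for illustration, not from the post):

```python
# Sketch: fit SD(C) ~= c0 + c1*C to the SDs of replicate measurements
# made across the assay's working range. The intercept c0 is the
# "machine noise" at zero concentration, so the SD (and the weight
# 1/SD^2) remains finite even for results below any nominal LOQ.
# The replicate SDs below are invented for illustration.

concs = [0.0, 0.5, 5.0, 50.0, 250.0]   # nominal concentrations (ug/mL)
sds   = [0.2, 0.25, 0.6, 4.0, 20.0]    # SD of replicates at each level

# Ordinary least-squares fit of a straight line, in closed form.
n = len(concs)
mean_c = sum(concs) / n
mean_s = sum(sds) / n
c1 = (sum((c - mean_c) * (s - mean_s) for c, s in zip(concs, sds))
      / sum((c - mean_c) ** 2 for c in concs))
c0 = mean_s - c1 * mean_c

print(f"SD(C) ~= {c0:.3f} + {c1:.4f} * C")
print(f"machine noise at C=0: SD ~= {c0:.3f}")
```

A quadratic or cubic in C can be used the same way when the error profile is not linear; the point is only that the fitted SD at C = 0 is a finite, usable number rather than a censoring threshold.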
You might look at:
1. Jelliffe R, Schumitzky A, Bayard D, Leary R, Botnen A, Van Guilder M, Bustad A, and Neely M: Human Genetic Variation, Population Pharmacokinetic-Dynamic Models, Bayesian Feedback Control, and Maximally Precise Individualized Drug Dosage Regimens. Current Pharmacogenomics and Personalized Medicine, 7: 249-262, 2009. There is a good discussion of the issue here, with a very relevant figure.
2. Jelliffe RW, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis P, Barbaut X, and Tahani B: Individualizing Drug Dosage Regimens: Roles of Population Pharmacokinetic and Dynamic Models, Bayesian Fitting, and Adaptive Control. Therapeutic Drug Monitoring, 15: 380-393, 1993.
This issue has been discussed many times in PharmPK, and it is making some progress, but not as much as it should. Much can be found in the archives of PharmPK. Ask David how to get at them. This is because, I think, the MDs are not trained in this at all, and the lab simply wants to present the MD with a result which gives him/her a clinical impression (low, ok, high, etc.), rather than a really quantifiable result.
Further, trough samples are very popular, even though they are usually the least informative ones in TDM. The trough is popular because the result is least affected by errors in recording the times when doses are given and samples are drawn, but that same insensitivity makes it usually the least informative sample from the point of view of PK information about the behavior of the drug in question. Optimal design strategies such as D-optimal design are much better, and should also be employed. With once-daily aminoglycoside dosing, the trough is often "undetectable" or BLQ. A sample at 8 hr, plus a peak (out of the opposite arm at the end of the infusion), is much better.
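A toy illustration of the D-optimal idea (the dose, volume, and rate constant below are assumptions chosen to give a roughly 4 h half-life, not values from the post): for a one-compartment IV bolus model, pick the pair of sample times that maximizes the determinant of the parameter sensitivity matrix.

```python
# Sketch: D-optimal choice of two sample times for a one-compartment
# IV bolus model, C(t) = (Dose/V) * exp(-k*t).
# All numeric values are illustrative assumptions.
import math
from itertools import combinations

dose, V, k = 420.0, 15.0, 0.173   # assumed typical values (t1/2 ~ 4 h)

def sens(t):
    """Sensitivities of C(t) to the two parameters V and k."""
    e = math.exp(-k * t)
    return (-dose / V ** 2 * e,    # dC/dV
            -dose / V * t * e)     # dC/dk

# Candidate sampling times on a half-hour grid over one dosing interval.
times = [0.5 * i for i in range(1, 49)]   # 0.5 .. 24 h

# Maximize |det| of the 2x2 sensitivity (Jacobian) matrix.
best = max(combinations(times, 2),
           key=lambda p: abs(sens(p[0])[0] * sens(p[1])[1]
                             - sens(p[0])[1] * sens(p[1])[0]))
print("D-optimal pair of times (h):", best)
```

With these assumed values the winning pair is the earliest (peak-like) sample plus one a few hours into the interval, near 1/k; a 24 h trough never wins, which is the thrust of the paragraph above.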
Sensitive assays are always better than insensitive ones, but the real issue is always reached at the low limit. I would also suggest that you look for a reasonably precise variance for samples below 0.5 ug/mL, as you will often find them.
Very best regards,
Roger W. Jelliffe, M.D., F.C.P.
Professor of Medicine, Co-Director, Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine, 2250 Alcazar St, Room 134-B, Los Angeles CA 90033
www.lapk.org
Want to post a follow-up message on this topic? Send a follow-up message to PharmPK@boomer.org with "TDM - non-antibody based TDM for novel Aminoglycoside" as the subject.
Copyright 1995-2011 David W. A. Bourne (david@boomer.org)