Hi everybody, my name is Nisa. I am a young lecturer from Indonesia, and
now I am preparing to study for a master's degree. I am interested in
studying PK/PD because it is a new field of study in my country.
I am very interested in running PK/PD experiments in order to increase my
knowledge, and my plan is to get a master's degree in PK/PD.
I am a newcomer to this field, so perhaps someone can give me a reference
on the steps to design a PK/PD experiment.
Regards,
Khoirotin Nisak
Lecturer, Department of Clinical Pharmacy
Faculty of Pharmacy, Airlangga University
Surabaya, Indonesia
Hi,
As you know, it's easy to design a protocol for PK. The key in PK/PD, I
think, is how to determine the indicator of PD. After you solve this
problem, the next thing you need to do is build or choose a proper model
to link PK and PD.
Hope this helps!
In my experience running PK/PD studies, I have learned that there are
some points that really need to be thought about in order to generate
good, reliable PD data.
Think about the mechanism of action of the drug and how it relates to
the response you are measuring. Do you expect them to peak at about the
same time, or will there be a lag time, and will it be minutes, hours,
or days? This will affect not only your choice of time points but also
the type of model you will use. For example, if there is a delayed
effect resulting in a hysteresis loop, then you'll probably approach
your data with a linked model or an indirect response model.
On the topic of time points, you need to think about this carefully.
You want to make sure you are able to capture the onset of response,
the Emax, and the dissipation of response, and have enough time points
in between to characterize the shape of the curve and decide whether a
sigmoidal factor needs to be written into your model. If there is a lot
of variation in the response you're measuring throughout the time
course of sample collection (e.g., cyclic patterns or physiological
responses to stimuli, such as with heart rate), then more samples may
need to be taken to decrease the error from that variation and allow
the program to fit a better model.
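As a rough illustration of the "sigmoidal factor" mentioned above, here is a minimal Python sketch of a sigmoid Emax concentration-effect model; the parameter values and concentrations are made up purely for illustration:

```python
import numpy as np

def sigmoid_emax(c, e0, emax, ec50, hill):
    """Sigmoid Emax model: effect as a function of drug concentration."""
    return e0 + emax * c ** hill / (ec50 ** hill + c ** hill)

# Hypothetical parameter values, purely for illustration
conc = np.array([0.0, 1.0, 2.5, 5.0, 10.0, 50.0])
effect = sigmoid_emax(conc, e0=0.0, emax=100.0, ec50=5.0, hill=2.0)
# At c = EC50 the effect is halfway between E0 and E0 + Emax
```

With a Hill coefficient of 1 this collapses to the ordinary Emax model; whether the extra sigmoidal parameter is warranted is exactly what well-chosen time points let you judge from the shape of the curve.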
Make sure you have good baseline values. With something like glucose
levels or blood pressure, interindividual variation is high, so you
might want to take three baselines, possibly at varying time points,
and model with the mean.
It is difficult to make specific recommendations without knowing what
kind of study you are doing. If it is something more molecular or in
vitro, such as measuring cAMP levels in response to a stimulus, the
approach will be much different than for an in vivo study that might be
more clinical, such as the effects of an anesthetic agent on
physiological parameters (e.g., heart rate, blood pressure, GI
motility, etc.).
Good luck, and I hope this helps. Main points: think about the question
you are trying to answer. Is the response you're measuring a good
indicator to answer this question? What is the mechanism of action of
the drug, and how does it relate to the response? And unlike PK, where
the shape of the concentration-time curve dictates the model you pick,
with PD the mechanism of action guides the way you approach the model.
--
Kristin Grimsrud
Equine Analytical Chemistry Lab
DVM/PhD Graduate Student
Pharmacokinetics, Pharmacodynamics & Pharmacometrics (P3)
Pharmacology/Toxicology Graduate Grp
School of Veterinary Medicine
University of California Davis
The following message was posted to: PharmPK
Dear Kristin:
You might also consider just how you weight your data (of any type)
quantitatively according to its credibility or precision. CV% is not the
correct method for describing any assay error. The laboratory
community is
very much in error on this point. Fisher information is a good
quantitative
index of credibility of any data point. Look it up in any statistics
book.
It is the reciprocal of the variance with which that data point was
measured. This is true for ANY PK or PD response. You will not get
correct
model parameter values or distributions without weighting your data
correctly.
This matter has been discussed many times, and here it comes up
again. I have copied in some material below, as PharmPK does not take
attachments.
Very best regards,
Roger Jelliffe
Also, you might see:
1. Jelliffe R, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis
P, Barbaut X, and Tahani B: Individualizing Drug Dosage Regimens:
Roles of
Population Pharmacokinetic and Dynamic Models, Bayesian Fitting, and
Adaptive Control. Therapeutic Drug Monitoring, 15: 380-393, 1993.
2. Jelliffe R: Explicit Determination of Laboratory Assay Error
Patterns - A Useful Aid in Therapeutic Drug Monitoring (TDM). Check
Sample
Series: Drug Monitoring and Toxicology, American Society of Clinical
Pathologists Continuing Education Program, Chicago, Il, 10 (4) : pp.1-6,
1990.
2. FITTING DATA BY ITS FISHER INFORMATION, NOT BY ASSAY COEFFICIENT OF
VARIATION (CV%) OR BY SOME ASSUMED OVERALL ERROR MODEL.
2.1 Consequences of using the assay CV%.
When making models or doing TDM to care for patients, use of the
traditional percent error of an assay to describe its precision has
several
significant negative consequences. Using CV%, the apparent precision
of an
assay drops markedly as the measured concentration becomes lower and
lower.
An "acceptable" categorical lower limit of quantification is often
taken as
something like a CV of 10 or 20%. This varies between laboratories.
Regulatory bodies often make decisions about what is said to be
"acceptable"
based on judgment, but, sadly, not on science. An intuitively taken
policy
is simply decided upon. There is much discussion at meetings about
just what
constitutes an "acceptable" CV% which reflects "acceptable" precision.
Below
this value a categorical cutoff is usually made, and the data is
censored.
However, even a blank sample is measured with a certain SD. Of course
at the
blank, the CV% is infinite. However, the SD at the blank is always
finite,
and is easily determined as the machine noise.
2.2 The first major problem with CV% - the illusion that one has to
censor low measurements.
Very low measurements are thought of as the signal being "lost in
the noise". Below a selected cutoff, the measurement is not felt to be
"precise enough" for acceptable quantification (the lower limit of
quantification, or LLOQ), or further down, for detection (the lower
limit of
detection, or LLOD). Data below these judgmentally selected cutoffs are
censored and are either not reported, or are reported simply as being
"less
than" some selected LLOQ or LLOD. Such data reported as "below
detectable
limits" often eventually become regarded by physicians (and by their
patients as well) as though the substance being measured (a Philadelphia
chromosome, an HIV PCR, or a drug concentration, for example) somehow
is not
really there. This often leads to serious clinical and pharmacokinetic
misperceptions, as "nondetectable" eventually becomes mentally equated
with
zero. Actually, several specific policies have been developed to deal
with
this problem of censored data [17]. None has been successful. The actual
measurement, whatever it is, is the best reflection of what is actually
there, along with its SD.
2.3 The second major problem with CV% - no way to give correct
weighting
of measured data for modeling.
The other problem, an increasingly important one, is that there is
no way to assign a proper quantitative measure of credibility to a data
point using CV%. This is a problem relatively new to the laboratory
community. Data points are increasingly being used clinically now for
TDM
and Bayesian pharmacokinetic analysis to make individual patient
pharmacokinetic models. It is interesting but sad that in the statistics
books, one never finds CV% as a mathematical or statistical measure of
credibility. Instead, one finds the Fisher information of a data point
[18].
This is the reciprocal of the variance with which any data point was
measured.
It is also thought that the assay SD is much less constant over its
operating range. This is often not the case (see Figure 2 below). What
is
important is to use a well known and documented quantitative measure
of the
credibility of a data point. This is the Fisher information [18]. It
should
not be corrupted by the measurement itself, as is the case with the CV%.
2.4 Fisher Information - the reciprocal of the assay variance.
The Fisher information of a data point is the reciprocal of the variance
with which that data point was measured. Take the assay SD at that
point.
Instead of dividing it by the measurement to obtain the CV%, simply
square
the SD to obtain the variance, V. Take its reciprocal, 1/V. Multiply
the
measured result by 1/V to assign proper weight to that assay
measurement.
This procedure is a well known and widely used measure of statistical
credibility [18-20].
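The procedure just described (SD, squared to a variance, then its reciprocal) can be written as a two-line Python sketch:

```python
def fisher_weight(sd):
    """Fisher information of a measurement: the reciprocal of its variance."""
    variance = sd ** 2
    return 1.0 / variance

# A measurement with an assay SD of 2.0 has variance 4.0 and weight 0.25
print(fisher_weight(2.0))  # 0.25
```

Multiplying each measurement's residual contribution by this weight during fitting gives precisely measured points more influence than imprecisely measured ones.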
2.5 Relationship between CV% and Fisher information.
Let us consider a hypothetical assay with a coefficient of variation
of 10% throughout its range. Suppose there is a measurement of 10
units. Its
SD is 1.0 unit, as its CV is 10%. Because of this, its variance is
1.0, and
its Fisher information is also 1.0. Now consider another measurement
from
another sample, where the value is 20 units. The CV being 10%, the SD
is now
2.0. The variance, however, is now 4.0, and the Fisher information is
now 0.25 (1/4).
This is the important difference between the Fisher information and
the CV%. It is because the variance about a data point is the square
of the
SD. So if an assay has a constant CV%, doubling the measured value
results
in a weight of only 1/4. Also, as an assay result gets lower and approaches
zero,
the SD usually gets smaller and smaller, though not always (see Figure 2
below). In any event, while the assay SD usually gets smaller and the
Fisher
information becomes greater, the CV%, as everyone knows, becomes
greater,
and eventually becomes infinite. One may erroneously think that the
measurement becomes "lost in the noise". This is the perceptual
problem when
using CV%. It is because of the perception of assay error as CV% that
leads
people to make artificial and categorical cutoffs such as LLOQ and LLOD.
Data are then arbitrarily withheld and censored. This problem is
illustrated
in Figure 2 below. The figure is based on the documented error of the
Gentamicin assay at the Los Angeles County - USC Medical Center several
years ago. At the high end, a value of 12 ug/ml, measured in
quadruplicate,
had an SD of 1.71 ug/ml, and a CV of 14.3%. A value of 8.0 ug/ml,
similarly
measured in quadruplicate, had an SD of 0.79 ug/ml and a CV of 9.96%. A
value of 4.0 ug/ml, again in quadruplicate, had an SD of 0.41 ug/ml
and a CV
of 10.83%. A value of 2.0 ug/ml, again in quadruplicate, had an almost
identical SD of 0.42 ug/ml, but the CV now rose to 21.15%. Finally, a
blank
measurement, also done in quadruplicate, had an SD of 0.57 ug/ml. The
CV%,
of course, was infinite.
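The quadruplicate figures above can be tabulated in a short Python sketch; the SDs are taken from the text, and the CV% values recomputed here may differ slightly from the quoted ones because of rounding:

```python
# (concentration in ug/mL, quadruplicate SD) pairs quoted in the text
data = [(12.0, 1.71), (8.0, 0.79), (4.0, 0.41), (2.0, 0.42), (0.0, 0.57)]

for conc, sd in data:
    cv_pct = (sd / conc * 100.0) if conc > 0 else float("inf")
    fisher = 1.0 / sd ** 2  # always finite as long as the SD is nonzero
    print(f"conc={conc:5.1f}  SD={sd:.2f}  CV%={cv_pct:8.2f}  weight={fisher:.2f}")
```

The table makes the point numerically: at the blank the CV% is infinite, but the SD, and therefore the Fisher weight, remains perfectly finite.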
2.6 Using Fisher Information, there is no LLOQ or LLOD, and no need
to
censor low measurements.
A problem arises when a result is in the gray zone, below the LLOQ
but a little above the blank. There has been much discussion about
what the
best thing is to do about this problem. Some have said it should be
set to
zero. Others say it should perhaps be set to halfway between the blank
and
the LLOD. Commonly, laboratories have reported the result simply as
being
"less than" whatever the LLOQ, in their judgment, is considered to be.
However, when doing therapeutic drug monitoring or any
pharmacokinetic modeling, this is a most unsatisfactory situation. The
measurement simply cannot be used in any procedure to fit data
quantitatively or to make a proper population pharmacokinetic model of a
drug.
It is extremely easy to do all this, and to make both the
toxicologists and the pharmacokineticists happy at the same time, by
reporting the result both ways. For example, a gentamicin sample might
be
reported as having a measured concentration of "0.2 ug/ml, below the
usual
LLOQ of 0.5 ug/ml". Both parties can easily have what each needs for
their
work. The assay error polynomial can be stored in software to do the
proper
weighting and fitting of the data.
(Fig 2 was here).
Figure 2. Relationship between measured concentration (horizontal
scale), CV% (right hand scale) and Assay SD (left hand scale). CV%
(diamond
symbols) increases as shown at low values. On the other hand, the
assay SD
is always finite at any value, all the way down to and including the
blank.
Because of this, there is no need to censor any data at all. The
measurement
and the SD, done in this way, enhance the sensitivity of any assay all
the
way down to and including the blank, with a well documented statistical
measure of credibility.
It is a good thing that much attention has been paid to determining
the error of assays. However, once the assay has been shown to be
"acceptably" precise, that error has usually been forgotten or
neglected.
For example, many error models simply use the reciprocal of the assay
result
itself, or its squared value, and forget the actual error of the
assay. On
the other hand, they often assume a model for the overall error
pattern and
estimate its parameter values. This is usually done because it is
assumed
that the assay SD is only a small part of the overall error SD, due to
the
many other significant remaining environmental sources of error. That is
clearly not so, as we shall see further on.
2.7 Determining the Assay Error Polynomial
Optimally, one should first determine the error pattern of the assay
quite
specifically, by determining several representative assay measurements
in at
least quintuplicate, and to find the standard deviation (SD) of each of
these points [19,20]. An example of this is shown in Figure 3.
One can measure, in at least quintuplicate (and the more the better
- some say 10), a blank sample, a low one, an intermediate one, a high
one,
and a very high one. One can then fit the relationship between the serum
concentration (or any other measured response) and the SD with which
it has
been measured, with a polynomial of up to third order if needed, so
that one
can then compute the Fisher information associated with any single
sample
that goes through the laboratory assay system.
(Fig 3 was here).
Figure 3. Graph of the relationship between serum Gentamicin
concentrations, measured by our hospital's assay in at least
quadruplicate
(the dots) and the standard deviations (SD's) of the measurements. The
relationship is captured by the polynomial equation shown at the top.
Y = assay SD, X = measured serum concentration, Xsq = square of serum
concentration.
One can then express the relationship as
SD = A0 + A1*C + A2*C^2 + A3*C^3 (1)
where SD is the assay SD, A0 through A3 are the coefficients of the
polynomial, C is the measured concentration, C^2 is the concentration
squared, and C^3 is the concentration cubed. A representative plot of
such a
such a
relationship, using a second order polynomial to describe the error
pattern
of an assay of gentamicin, is shown in Figure 3.
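As a sketch of equation (1), the (mean concentration, SD) pairs can be fitted with numpy.polyfit; the data points here are hypothetical, loosely based on the gentamicin example above, and a second-order polynomial is used as in Figure 3:

```python
import numpy as np

# Hypothetical (concentration, SD) pairs, loosely based on the gentamicin
# example above; each SD would come from at least quintuplicate replicates.
conc = np.array([0.0, 2.0, 4.0, 8.0, 12.0])
sd = np.array([0.57, 0.42, 0.41, 0.79, 1.71])

# Fit SD = A0 + A1*C + A2*C^2 (second order, as in Figure 3)
a2, a1, a0 = np.polyfit(conc, sd, 2)

def assay_sd(c):
    """Interpolated assay SD for a single measured concentration."""
    return a0 + a1 * c + a2 * c ** 2

def fisher_weight(c):
    """Weight for a single measurement: reciprocal of its assay variance."""
    return 1.0 / assay_sd(c) ** 2
```

Once the coefficients are stored, any single sample that goes through the assay system can be assigned its SD, and hence its Fisher information, by simple interpolation.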
2.8 Determining the Remaining Environmental Noise
In addition, a parameter which we have called gamma, a further
measure of all the other environmental sources of intra-individual
noise,
can also be computed by software for population PK modeling. We use it
in
our population modeling software as a multiplier of each of the
coefficients
of the assay error polynomial as described above. The nominal value of
gamma
is 1.0, indicating that there is no other source of variability than the
assay error pattern itself. Gamma is therefore usually greater than
1.0, but
may sometimes be less. It includes not only the various environmental
errors
such as those in preparing and administering the doses, recording the
times
at which the doses were given, and recording the times at which the
serum
samples were obtained, but also the errors in which the structural model
used fails to describe the true events completely (model
misspecification),
and also any possible changes in the model parameter values over time,
due
to the changing status of the patient during the period of data
analysis.
Gamma is thus an overall measure of all the other sources of
intraindividual
variability besides the assay error.
In this way, one can calculate just how much of the overall SD is
due to the assay SD, and how much is due to the remaining
environmental SD.
Determining gamma helps greatly to explain the impact of the
environmental
variability found in any fit. If gamma is small (2-4), it suggests
that the
sum of the environmental sources of noise is small. If it is large
(10), it
suggests that the overall environmental noise (the total effect of all
the
other factors mentioned above) is large.
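A minimal sketch of how gamma partitions the overall noise, assuming (as described above) that gamma simply scales the assay error polynomial, which is equivalent to scaling the assay SD itself:

```python
def overall_sd(assay_sd, gamma):
    """Gamma multiplies every assay polynomial coefficient, which is the
    same as scaling the assay SD itself."""
    return gamma * assay_sd

def assay_fraction(gamma):
    """Fraction of the overall variance attributable to the assay alone."""
    return 1.0 / gamma ** 2

print(assay_fraction(2.0))  # 0.25: the assay is a quarter of total variance
```

With gamma near 1.0, nearly all the intra-individual noise is assay error; as gamma grows, the environmental sources (dose preparation, timing errors, model misspecification, changing patient status) dominate.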
Very best regards again,
Roger Jelliffe
The following message was posted to: PharmPK
Hi Roger. Again, with instrumental approaches there is no measurement of
variance. There is a single determination of each sample's analyte
concentration. With ligand binding assays you may have this information
for a sample, since traditionally those assays measure each sample in
duplicate. You can extrapolate the precision from the curve, but
extrapolation is never a favored approach. We are stuck here unless
instrumental analyses are required to measure samples in duplicate.
(There is actually no requirement to measure ligand binding assays in
duplicate - it is the practice!)
Dear Ed:
I ask you to read again what I sent. Why (and how) do
people now, in the lab community, decide that an assay is precise
enough? They run replicates (there should be at least 5) and report
the CV% at various concentrations. This is the estimate of the error
of an assay as it stands today.
What is wrong with me that I cannot reach lab people? You
are the only one that talks with me, and I thank you profusely for that.
You say "You can extrapolate the precision from the curve but
"extrapolation is never a favored approach". Reasons for statements
are useful, not words like "favored". However, I am not talking about
extrapolation. Yes. Extrapolation beyond any assay error
determination, whether CV% or SDE, can result in dangerous errors in
the estimate of assay precision. THAT is why it is "not favored".
I am talking about interpolation. I am talking about determining the
assay SD over its working range. You say "We are stuck here unless
instrumental analyses are required to measure samples in duplicate.
(There is actually no requirement to measure ligand binding assay in
duplicate- it is the practice!)".
Why is this the PRACTICE if there is no real reason for it? Does the
lab community not have real reasons for what they do?
Ed, once again:
The idea is to measure several representative samples in
replicate - at least 5 replicates per sample. The more the better. It
takes more replicate samples to get good estimates of SD than it does
for means. That is why about 5, not just 2 or 3, is a minimum for this.
1. A blank sample - 5 replicates. Get the mean and SD
2. A low sample - 5 replicates. Get the mean and SD
3. A midrange sample - 5 replicates. Get the mean and SD
4. A high sample - 5 replicates. Get the mean and SD
5. A very high sample - 5 replicates. Get the mean and SD
Now you have 5 sets of mean and SD. Fit the relationship between means
and SD's with a polynomial to find the GENERAL relationship between an
assay measurement and its SD. Square this to get the variance V. Take
the reciprocal 1/V as the weight for each individual determination.
Now, use this relationship as a good estimate of the SD of the SINGLE
samples that come through the system. The idea here is to use the
assay error polynomial to obtain a useful quantitative estimate of
the precision with which EACH SINGLE sample was measured.
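The five steps above can be sketched in Python; the replicate values below are invented purely for illustration:

```python
import statistics

# Invented replicate measurements (5 per sample), one set per level
replicates = {
    "blank":     [0.1, -0.2, 0.0, 0.3, -0.1],
    "low":       [1.9, 2.1, 2.4, 1.8, 2.0],
    "midrange":  [4.2, 3.8, 4.0, 4.4, 3.9],
    "high":      [8.1, 7.6, 8.3, 7.9, 8.0],
    "very high": [12.5, 11.6, 12.1, 12.4, 11.9],
}

for name, values in replicates.items():
    mean = statistics.mean(values)
    sd = statistics.stdev(values)  # sample SD from the 5 replicates
    print(f"{name:10s} mean={mean:6.2f}  SD={sd:.3f}")
# The (mean, SD) pairs are then fitted with the polynomial of equation (1)
# in the material above, and 1/SD^2 gives the weight for any single sample.
```

This is a one-time calibration exercise; routine samples are still run singly, and their SDs are looked up from the fitted polynomial.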
See DeGroot, M: Probability and Statistics, 2nd edition, Addison-Wesley,
1989, p. 423.
See also Jelliffe RW, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire
P, Gomis P, Barbaut X, and Tahani B: Individualizing Drug Dosage
Regimens: Roles of Population Pharmacokinetic and Dynamic Models,
Bayesian Fitting, and Adaptive Control. Therapeutic Drug Monitoring,
15: 380-393, 1993.
Is this so difficult? The lab guys do it all the time to get the CV%.
The idea is simply to use the SD instead of the CV%, and to USE it, in
the form of the polynomial. Don't simply file it away somewhere to
decide if an assay error is "acceptable" or not. Follow the steps
above. Use any available software for fitting the polynomial to the
data. We also have a routine to make it easy to do this fitting.
USE the polynomial. Put it in our population model software for the
relevant drug. In this way, each measured serum concentration can be
given its quantitative measure of credibility according to its weight
1/V. This improves the fitting process and gives more correct
parameter values for either a population model or an individual
patient model.
All the best,
Roger Jelliffe
The following message was posted to: PharmPK
We can generate precision and accuracy for standard curve points and
sets of
QCs since they are run most often in duplicate for instrumental assays
where
samples are usually single analyses.
In ligand binding assays most often the standard curve points, QC and
samples are run in duplicate.
The modification and comparison of Fisher vs. current measures of
performance
could most easily be done with ligand binding assays-at least initially.
Running instrumental assays for subject samples in duplicate will
result in
an almost doubling of the run time. That would be a difficult change to
accept.
Dear Ed:
You are getting there. However, duplicate samples simply do
not give a reasonable estimate of anything but the mean. It is much
better to do as I have suggested - to run, not duplicates, but sets of
at least 5 replicates per sample (again, the more samples, the more
reliable are the estimates of SD). It takes more samples to get a good
SD than it does for the mean.
All this can be done with any assay. Just do it. Why is it
any easier to do it with ligand assays? Yes - running everything in
duplicate will double the run time. THAT is why you only run several
typical representative samples (like the 5 replicates per sample I
mentioned), get the means and SD's, and make the polynomial. After
that, only run single samples, just as before. You now have the
polynomial with which to get a good estimate of the SD with which any
single assay result is obtained.
No prolongation of run time. Just a little calculation,
which you already do, then get the polynomial. That's all. Look up the
SD for the single measurement by calculating it from the polynomial.
All the best,
Roger
Copyright 1995-2011 David W. A. Bourne (david@boomer.org)