Back to the Top
"Why should an hospital install a PK program ?"
Hi friends,
Unfortunately, here where I live the health system is still poor, and it
insists on staying as it is...
What kind of data could you send me to help me make the case for "why
should they do it"? In our setting we do not have much material on this subject.
How much could they save?
Sorry to ask... but it's time for pharmacokinetic programs to enter the
third world :)
affectionately,
F Rios
Federal University of Bahia - Brazil
Laboratory of Toxicology
fabrios.at.ufba.br
Back to the Top
Having had some experience with using pharmacokinetics both in a
research and clinical setting, I have found that computer-based
pharmacokinetics programs are not terribly useful at patient bedside. I
say this because, in my experience, when a computer enters the picture,
common sense leaves. The vast majority of drugs for which concentration
levels are meaningful follow a one-compartment, first-order elimination
model that can easily be calculated by hand (see the example by Sawchuk &
Zaske for gentamicin in "Clin Pharmacol Ther 1977
Mar;21(3):362-9"). Furthermore, the vast majority of drugs for which
pharmacokinetic estimation using serum or plasma concentrations is
necessary follow a simple dose-concentration proportionality, i.e. you
double the dose-you double the concentration. Taking this and some
knowledge about basic drug pharmacology/pharmacokinetics and how drug
elimination and distribution may be altered in various disease states,
there is probably very little need for elaborate computer modelling,
especially Bayesian forecasting that, in my experience with critically
ill patients, provides very little additional information to a common
sense approach to therapeutic monitoring. I should also state that many
drugs have alternate surrogate therapeutic endpoints - e.g.
antiarrhythmics (ECG), vasoactive drugs (vital signs, SG monitoring) -
for which plasma concentration monitoring is relatively useless. I would
appreciate input from others regarding this very anti-technological
viewpoint.
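[For illustration, a minimal sketch in Python of the kind of one-compartment,
first-order hand calculation referred to above. The levels, times and doses
are purely hypothetical and are not taken from any message in this thread.]

import math

# Hypothetical levels (mg/L) at two post-distribution times (h) after one dose
c1, t1 = 6.0, 1.0      # "peak" drawn 1 h after the end of the infusion
c2, t2 = 1.5, 7.0      # trough drawn 7 h after the end of the infusion

k = math.log(c1 / c2) / (t2 - t1)   # first-order elimination rate constant (1/h)
t_half = math.log(2) / k            # elimination half-life (h)

# Linear kinetics: doubling the dose doubles the concentration, so a new dose
# aimed at a desired peak is just a proportional rescaling of the current dose.
current_dose = 80.0                 # mg, hypothetical
desired_peak = 8.0                  # mg/L, hypothetical target
new_dose = current_dose * desired_peak / c1

print(f"k = {k:.3f} 1/h, t1/2 = {t_half:.1f} h, suggested dose = {new_dose:.0f} mg")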
Sincerely,
Bertil Wagner, Pharm.D., FCCM
Documentation Specialist
Hoffmann-La Roche Inc.
Pharma Development Registration
340 Kingsland Street
Nutley, NJ 07110-1199
Tel. 973-562-5515
Fax. 973-562-5509
E-Mail. bertil_k.wagner.-at-.roche.com
Back to the Top
[A few replies - db]
From: "Dan Combs"
Date: Fri, 24 Apr 1998 14:44:09 -0800
reply-to: dzc.aaa.gene.COM
To: PharmPK.at.pharm.cpb.uokhsc.edu
Subject: Re: PharmPK Re: Cost X benefit
Mime-Version: 1.0
Yes, but I would point out that textbooks on therapeutic drug monitoring
and WA Ritschel's 'Handbook of Pharmacokinetics' provide many equations a
clinician might find useful. Some will allow you to quickly estimate
concentrations after a particular dosing regimen based upon a simple
one-compartment model, adjust doses in different therapeutic
situations, etc.
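[As a companion illustration, a minimal sketch of the kind of one-compartment
equation such handbooks tabulate: predicted steady-state peak and trough for a
repeated IV bolus regimen. All parameter values are assumed for illustration only.]

import math

dose = 500.0    # mg per dose (assumed)
V = 30.0        # L, apparent volume of distribution (assumed)
k = 0.173       # 1/h, elimination rate constant, half-life about 4 h (assumed)
tau = 8.0       # h, dosing interval

# Repeated IV bolus, one-compartment, first-order elimination
accumulation = 1.0 / (1.0 - math.exp(-k * tau))
cmax_ss = (dose / V) * accumulation        # steady-state peak (mg/L)
cmin_ss = cmax_ss * math.exp(-k * tau)     # steady-state trough (mg/L)

print(f"Css,max ~ {cmax_ss:.1f} mg/L, Css,min ~ {cmin_ss:.1f} mg/L")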
There are also probably many simple computer programs that are much more
clinically oriented than WinNonlin.
_________________________
Daniel L. Combs
PK&Metabolism Dept.
Genentech, Inc.
voice (415) 225-5847
fax (415) 225-6452
e-mail dzc.aaa.gene.com
---
X-Sender: mraucoin.-at-.popalex1.linknet.net
Date: Fri, 24 Apr 1998 21:00:56 -0500
To: PharmPK.-at-.pharm.cpb.uokhsc.edu
From: Robert Aucoin
Subject: Re: PharmPK Re: Cost X benefit
Mime-Version: 1.0
I agree. Several years ago when I started in clinical work, I was into
using a long series of formulas to calculate the exact dose and interval of
drug X, Y or Z. After wearing out a few calculators, I have found that 95%
of my consults can be done with antique pencil lead. Also, our average
patient stay is 2.4 days. By the time I order a second set of levels on
Gent or Tobra, the patient is on his way out the door. Along the way I
have also become a proponent of limiting vancomycin use. Ergo: I get twice
as many consults about all manner of therapy but very few kinetic consults.
The positive offshoot is that doctors have come to trust my judgement.
When they get the kid they need help with, and who will stay around
awhile, they call me straight away. They don't always agree (like today
with an older Hem/Onc who loves to slam everyone with four drugs when they
hit the door), but at least they listen.
I call it "common sense kinetics." Hard to sell to the suits, but when the
docs go to bat for you, it gets a lot easier.
stay casual................. Robert
Robert G. Aucoin, R.Ph. tel: 1-888-765-PICU (7428) (new toll free #)
Pediatric Clinical Pharmacist fax: 504-765-7917
The Children's Center e-mail: mraucoin.-at-.linknet.net (home)
Our Lady of The Lake RMC or RAUCOIN.aaa.ololrmc.com (work)
Baton Rouge, LA
---
From: "Robert D. Phair, Ph.D."
To: "'PharmPK.aaa.pharm.cpb.uokhsc.edu'"
Subject: RE: PharmPK Re: Cost X benefit
Date: Sat, 25 Apr 1998 00:00:31 -0400
MIME-Version: 1.0
Bertil Wagner wrote to the list asserting that, " computer-based
pharmacokinetics programs are not terribly useful at patient bedside." He
continues, "I say this because, in my experience, when a computer enters
the picture, common sense leaves". He also says that in his experience "
there is probably very little need for elaborate computer modelling".
I am writing to provide counterpoint.
First, let us acknowledge that when things are simple, common sense is an
excellent tool. Understanding the dosing of any compound that obeys first
order kinetics and is distributed in a single compartment does not require
rocket science. And Dr. Wagner is clearly correct that in a linear system
doubling the dose will double the concentration. But beyond this minimal
agreement, I cannot go.
Human biology is demonstrably complex, and computer modeling is an
essential tool for the study of complex systems. Moreover, it turns out
that the system need not be very complex before the human brain is no
longer an adequate guide to system behavior. The cognitive psychologists
have studied this in some detail, and have found that we can predict the
behavior of physical systems with two, and, if we are very clever, with
three interacting variables. Beyond three, unaided human predictions are no
better than chance. Beyond three even Einstein had to guess.
What complexities are found in pharmacokinetics and pharmacodynamics that
could justify "elaborate computer modeling?" What potential advantages
accrue to those who adopt "elaborate computer modeling"? As a first example
consider the pivotal area of drug interactions. This is an enormous problem
that is palpably beyond the capacity of the unaided human brain. Consider
also the titration of drug effects and side-effects; what are the
determinants of therapeutic index? Think too of the untold tales of
pharmaceutical development in which a very small, very slowly turning over
drug compartment (that slow exponential you decided to ignore so that only
a single compartment would be required) may have precipitated liver failure
or renal failure late in clinical trials. Consider the suffering that might
have been avoided, consider the time and money that might have been saved.
Next, consider hysteresis of drug effect. Or consider inactive pro-drugs
that must be metabolized to the active compound. Consider drugs whose
metabolites have beneficial or deleterious effects. Consider drugs acting
on one of several converging receptor signaling pathways. Consider the
enormous complexity of targeting drugs at the cell cycle. Consider the
toxicology of environmental pollutants. Consider the legion of difficulties
surrounding the delivery of anti-tumor agents. Consider nonlinearities of
gastrointestinal absorption caused by physiological changes in pH, or the
nonlinearities of physiological control systems that must respond
predictably to our drug candidate. Consider the terrible human costs, and
the weeks or months required to "get the meds right" for patients with
various bipolar disorders. And finally consider the vast array of potential
new genomic drugs; how can one hope to predict the consequences of
over-expressing any gene or of knocking out any gene without the assistance
of elaborate computer models? On dusty shelves in every laboratory where
gene targeting and gene therapy are being practiced, there are a few or a
dozen unpublished studies of reproducible but incomprehensible outcomes. I
assert these outcomes will never be understood without "elaborate computer
modeling". This means that except for simple (and therefore rare) monogenic
disorders, Dr. Wagner's dictum of simplicity and common sense will fail
frequently, if not always. Polygenic diseases with multiple environmental
risk factors are the major causes of human morbidity and mortality; they
consume quality of life. Strongly stated, these diseases are system
properties, and rational therapies will never be developed without the
assistance of "elaborate computer models".
I'm sure our colleagues could add substantially to my list.
Now, a word about common sense. Dr. Wagner asserts that when a computer
enters the picture, common sense leaves. This can only mean that the
computer printouts are over-revered, as if they were tablets brought down
from Mt. Sinai or some equivalent artifact from another of the world's
great religions. Surely, this misplaced religious fervor is not common
sense. Dr Wagner is correct about the evanescence of common sense, but this
is hardly the fault of the computer. Indeed, it is just as familiar to see
common sense depart by the nearest exit when an ultracentrifuge or a mass
spectrometer or a polyacrylamide gel enters the picture. We all have gaps
in our technological expertise, and when we do not understand the internal
logic of a machine or its software, we have no way to evaluate its results
effectively. Understandably, people placed in this position often decide to
trust or to mistrust the machine. Neither is scientific.
My bottom line is that before you judge what "elaborate computer modeling"
can or cannot do, you owe it to yourself to seek the advice and assistance
of those who have spent their careers at the interface we now call
computational biology or bioinformatics. These disciplines have already had
an enormous impact in fields that have adopted them, and they will
completely change the way we study biology in the next millennium.
Investigators of all stripes can gain enormous competitive advantages by
exploiting these new computational tools. But the real power of these tools
comes from partnering with the best practitioners. Put a weekend hobbyist
in the best woodworking shop in the world and you get a passable table with
one weak leg; put a skilled craftsman in the same shop and you get an
heirloom.
Regards,
Bob
----------
Robert D. Phair, Ph.D. rphair.aaa.bioinformaticsservices.com
BioInformatics Services http://www.bioinformaticsservices.com
12114 Gatewater Drive
Rockville, MD 20854 U.S.A. Phone: 1.301.315.8114
Partnering and Outsourcing for Computational Biology
---
Reply-To: "Thomas Senderovitz"
From: "Thomas Senderovitz"
To:
Subject: Sv: PharmPK Re: Cost X benefit
Date: Sat, 25 Apr 1998 11:21:41 +0200
MIME-Version: 1.0
X-Priority: 3
Hey all,
Dr. Wagner states that the use of computer programs (including Bayesian
forecasting) offers very little advantage in the daily setting of TDM. I
agree with the point of view that many of the programs are not directly
user-friendly, and that a high level of knowledge is necessary, but I do
NOT agree that they do not offer any advantages - in fact, for several
drugs and settings, I think Bayesian approaches can be very helpful. In TDM
of psychiatric patients, it is very often extremely difficult - if not
impossible - to get steady-state trough values. In this setting, BF can
probably be of great help, although there's still some documentation work
to be done.
In third-world countries, it is probably not the right thing to invest in
advanced computer technology and know-how before you can get the basic
health system to work properly. But in the developed countries, I really
hope that BF (or other approaches) could be implemented much more in the
setting of TDM, i.e. establishing PK service units. There are a lot of
practical problems, but why shouldn't we be able to solve them?
Thomas Senderovitz
E-mail: senderovitz.-a-.dadlnet.dk
---
From: Stephen Duffull
To: "'PharmPK.at.pharm.cpb.uokhsc.edu'"
Subject: RE: PharmPK Re: Cost X benefit
Date: Sun, 26 Apr 1998 23:22:49 +0100
MIME-Version: 1.0
Bertil Wagner wrote:
>Having had some experience with using pharmacokinetics both in a
research and clinical setting, I have found that computer-based
pharmacokinetics programs are not terribly useful at patient bedside...
....
appreciate input from others regarding this very anti-technological view
point.
I feel compelled to respond to Bertil's comment. In part I agree with
Bertil that in many cases simple dose adjustments can be made in order to
individualise therapy based on first principles. However, a caveat to this
argument becomes apparent in many clinical circumstances, e.g. when the
patient is not at steady state, or when the plasma concentration was taken
at a time that is difficult to interpret easily (e.g. during the
distribution phase for, say, digoxin). Both of these scenarios (and many
others) make first-principles calculations more difficult. If indeed TDM is
required, which is another argument altogether, then why not use a Bayesian
method (e.g. MAP) which allows inference about dose to be made from minimal
observation data? The quoted Sawchuk & Zaske method required 4 plasma
samples (although clinically we are lucky to get two and sometimes only 1
sample!). In contrast, Bayesian methods may provide a satisfactory forecast
with only 1 or 2 observations. Therefore, if modelling-based approaches are
to be considered at all, then, in terms of getting the maximum value from
any measurement data, the Bayesian MAP method would currently seem to be
the most effective. The difficulty lies in finding a user-friendly
Bayesian method at the right price.
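[For readers unfamiliar with the approach Steve describes, a minimal sketch of
a MAP Bayesian fit for a one-compartment IV bolus model, using a crude grid
search so it stays dependency-free. The priors, variances and the single
observed level are invented for illustration and are not population values for
any real drug. The point is simply that the estimate balances misfit to the
measured level against departure from the population priors, which is what lets
one or two samples suffice.]

import math

dose = 100.0                     # mg IV bolus (assumed)
obs_time, obs_conc = 6.0, 2.1    # a single measured level: 2.1 mg/L at 6 h (assumed)
sd_assay = 0.3                   # assay SD at that level, mg/L (assumed)

# Population priors (log-normal): typical value and between-subject SD on the log scale
cl_pop, omega_cl = 4.0, 0.3      # clearance, L/h (assumed)
v_pop, omega_v = 25.0, 0.2       # volume, L (assumed)

def map_objective(cl, v):
    """Data misfit plus prior penalty, both as squared standardized deviations."""
    pred = (dose / v) * math.exp(-(cl / v) * obs_time)
    fit = ((obs_conc - pred) / sd_assay) ** 2
    prior = (math.log(cl / cl_pop) / omega_cl) ** 2 + (math.log(v / v_pop) / omega_v) ** 2
    return fit + prior

# Crude grid search over CL = 2.0-8.0 L/h and V = 15-40 L
best = min(((map_objective(cl, v), cl, v)
            for cl in (0.1 * i for i in range(20, 81))
            for v in (0.5 * j for j in range(30, 81))),
           key=lambda t: t[0])
print(f"MAP estimate: CL ~ {best[1]:.1f} L/h, V ~ {best[2]:.1f} L")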
Regards
Steve Duffull
================
Stephen Duffull
School of Pharmacy
Manchester University
PH +44 161 275 2355
Back to the Top
[A few more replies - db]
Date: Mon, 27 Apr 1998 15:43:28 -0400
From: "Dr. W. Webster"
Subject: Re: PharmPK Re: Cost X benefit
To: PharmPK.aaa.pharm.cpb.uokhsc.edu
MIME-version: 1.0
X-Priority: 3
We have an excellent, well-trained clinical pharmacist staff and made a
pharmacokinetics package with all the features, including modeling and
Bayesian forecasting, available for their use. All tried it, and when it
gave them an answer contrary to their experience (probably because of a
data input error) they wouldn't go near it again. The other problem was
trying to maintain the database on a single computer when patients were
scattered over many floors and wards.
The clinical staff found there were other parameters of interest, such as
ward staff habits and whether the dose was completely infused, that had no
way to be entered into the database yet bore more influence on the outcome.
Calculations were a small portion of the activity.
WW
--
Date: Mon, 27 Apr 1998 14:19:53 -0700 (PDT)
From: William Dager
X-Sender: szdager1.aaa.dogbert.ucdavis.edu
To: PharmPK.aaa.pharm.cpb.uokhsc.edu
cc: Multiple recipients of PharmPK - Sent by
Subject: Re: PharmPK Re: Cost X benefit
MIME-Version: 1.0
Re: High tech PK programs for bedside dosing adjustments
Some thoughts:
1. One important item to consider when using PK computer programs is how
good your input data is. Garbage in, garbage out. Time of dose or serum
concentration sampling, assay errors (PHT, Vanco, Dig), or physiological
changes in the patient (cardiac, renal, etc.) are examples of factors that
can be critical in choosing the correct dose for the patient. I find that
in today's environment, with limited time available per patient, the use
of complex models has its place for some drugs in some patients. But most
patients can be managed by proper use of the general models (one
compartment, linear), combined with some knowledge of that drug's unique PK
characteristics (distribution rate, onset of a present drug-drug
interaction, etc.). Many advanced programs can't incorporate this
information.
2. I have noticed that when some of my colleagues use the advanced
or complex models, they frequently don't notice the larger picture (how is
the patient doing; besides the SCr, what was the urine output for the
past shift; or the dose is right, but we're using the wrong drug) since they
are so focused on what the computer is requesting and telling them. I
always tell my students to TREAT THE PATIENT, NOT THE LEVEL.
Just some thoughts for discussion:
William Dager, Pharm.D.,FCSHP
Coordinator, Pharmacokinetics Consult Service
UC Davis, Medical Center
---
Date: Mon, 27 Apr 1998 20:04:04 -0700
From: Brennan
MIME-Version: 1.0
To: PharmPK.at.pharm.cpb.uokhsc.edu
CC: Multiple recipients of PharmPK - Sent by
Subject: Re: PharmPK Cost X benefit
Fabricio,
High-quality software for clinical pharmacokinetics is well worth the
price. We use the USC Pack software and are very happy with it. It's also
not that expensive. Beyond the cost savings involved, the whole point of
using kinetic software is to minimize patient risk. There are many times a
quick linear regression analysis on a cheap pocket calculator can give the
proper perspective on a patient. Sometimes we need the intuition, sometimes
we need the numbers. Mostly we need both.
Bob B.
---
From: "F. Rios"
To:
Subject: Cost X benefit II
Date: Tue, 28 Apr 1998 23:00:33 -0300
MIME-Version: 1.0
X-Priority: 3
Once more...
Thanks!
But when I said "PK program", I did not mean to limit the subject to the
"software world". A PK program carries a cost for the hospital...
Unfortunately my words may not have been clear to you (my English isn't so
good, I'm sure). Of course, the software is an important tool, and the
choice isn't so easy.
My question concerns a broader, more complex picture... For example:
Who has data on the differences between empirical treatment and treatment
with PK oversight and support? How much sooner could the patient leave
the hospital? One (1) day is something expensive for them.
I ask this question because a great part of our hospitals do not have this.
I believe now, after receiving several e-mails, that here in Brazil we also
have good professionals working in this area.
Your friend,
F. Rios
---
X-Sender: jelliffe.-at-.hsc.usc.edu
Date: Tue, 28 Apr 1998 19:13:39 -0700
To: PharmPK.-at-.pharm.cpb.uokhsc.edu
From: Roger Jelliffe
Subject: Re: PharmPK Re: Cost X benefit
Mime-Version: 1.0
Dear Dr. Wagner:
Thank you for your comments. There have been several replies already, and
I will also add my voice.
I think this is largely a cultural problem, one still shared by many
teachers of PK in many pharmacy schools (nobody seems to teach PK to any
degree in medical schools, worse luck!), who have not kept up with the
times.
If you have been raised to use Sawchuk-Zaske, the method can only deal
with stable patients, and only in a steady-state situation, and that is all
many ask of a PK approach to patient care. If that is the only situation in
which you use PK or any fitting procedure, then you are quite correct: you
hardly need it at all, and the raw data of serum concentrations is a pretty
good guide by itself.
However, what did SZ show? They showed real improvement in patient care
when they used a model of the behavior of the drug, and developed dosage
regimens to achieve desired target goals. This was a BIG step forward - the
real beginning of model-based, goal-oriented, individualized drug therapy.
Other modeling methods have come along since. They have been designed not
merely to analyze patients in steady state situations and to develop dosage
regimens for steady state situations, but rather to track the behavior of
drugs in patients who are sick, unstable, and with significantly changing
body weight and/or renal function, for example, and to take hold of a
patient in a nonsteady state condition, and to GET and KEEP desired
concentrations until and after a steady state ensues.
Mathematics, I have been told, is like common sense IF YOU THINK ABOUT IT.
In the aviation industry and in the military, they certainly do think a lot
about it, and have developed the flight control software for the modern
fighters and the new airliners. Just like flight control and missile
guidance systems, which use models of the inputs (movements of the control
surfaces and the throttles) and the outputs (the responses of the planes or
missiles), we use models of the behavior of the drugs in patients, and
develop methods (dosage regimens) to track and control the behavior of
drugs in patients. Just as the pilot's movements of the stick are seen by
the computer in the flight control system as target goals to be achieved,
and the movements of the control surfaces are computed to best satisfy
these goals, so we do exactly the same with model-based, goal-oriented,
individualized drug therapy, computing the regimen to best achieve the
desired target goal.
It also sounds logical to me that if you can get better models of the
process, you can control it better (more precisely). What are some of the
strengths and weaknesses of SZ, and of other approaches? These have been
discussed at some length in Clin. Pharmacokinetics 21: 461-478, 1991.
Briefly, when you get new serum data, if you use SZ, you throw out all the
old data and start over again as if you knew nothing already about the
patient. Also, you must have at least 2 data points. You must wait for
distribution to be complete before getting a sample after a dose. All this
is very suboptimal. D-optimal design for getting serum levels, even with
its defects, is far better. You do not have to wait for distribution to be
complete, and you usually don't want to. That old approach comes from the
optical illusion seen when viewing PK plots on log paper rather than
linear, and D-optimal designs are now being used, not just for getting the
best bang for the buck in population PK studies, but also in clinical care.
Further, how do you relate 2 different sets of levels taken at 2 different
times, in sick patients with changing renal function? SZ is the most
wasteful method of any when it comes to getting serum samples, compared to
nonlinear least squares or to maximum a posteriori probability (MAP)
Bayesian methods. All this is discussed in that article. The current MAP
Bayesian approach, introduced by Sheiner (one of his greatest contributions
to patient care), can use models that accommodate the Kel to changing
renal function and can track the behavior of patients when they are
unstable and not at all in a steady state, usually over their entire dosage
history. One can also estimate creatinine clearance when serum creatinine
is unstable and changing rapidly. It is a whole lot better than the little
formulas which only work for steady-state situations. We have been using
such a method for over 25 years (see the various articles) and like it a
lot. It is a real help at the bedside in understanding what is happening
with a patient.
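[A minimal sketch of what "tracking" a nonsteady-state patient can look like,
assuming a simple linear link between the elimination rate constant and
creatinine clearance, Kel = k_nonrenal + slope x CrCl. All numbers are
invented; the point is only that the profile is pieced together over intervals
in which CrCl, and hence Kel, changes.]

import math

V = 20.0            # L, assumed volume of distribution
k_nonrenal = 0.01   # 1/h, non-renal elimination rate constant (assumed)
slope = 0.0025      # 1/h per mL/min of creatinine clearance (assumed)

# (interval start time h, CrCl mL/min during the interval, IV bolus mg at start)
course = [(0.0, 90.0, 120.0), (12.0, 40.0, 120.0), (24.0, 40.0, 0.0)]
end_time = 36.0

conc = 0.0
interval_ends = [c[0] for c in course[1:]] + [end_time]
for (t0, crcl, bolus), t1 in zip(course, interval_ends):
    kel = k_nonrenal + slope * crcl     # interval-specific elimination rate constant
    conc += bolus / V                   # instantaneous bolus input
    conc *= math.exp(-kel * (t1 - t0))  # first-order decay over the interval
    print(f"t = {t1:4.0f} h  CrCl = {crcl:3.0f} mL/min  Kel = {kel:.3f} 1/h  C = {conc:.2f} mg/L")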
This is WHAT YOU WANT AT THE BEDSIDE, so you can reconstruct the behavior
of the drug in the patient over the entire course of therapy, even during
significant changes in body weight and/or renal function. Only then can you
really evaluate the patient's clinical sensitivity to the drug, and only
then can you select your next target goal in the most informed way.
In addition, with some drugs such as digoxin, the correlation of clinical
effect is not with the serum levels, but with the computed concentrations
in the peripheral, nonserum compartment, as shown way back by Reuning et
al. in J Clin Pharmacol 13: 128-141, 1973. What has been happening in the
SZ culture to ignore such good work as Reuning's for so long? We have used
his model extensively in clinical settings for over 20 years, and have
found it to be tremendously useful. After a week of intuitive approaches
and TDM, one patient still was not maintained in sinus rhythm, despite being
converted to it 3 times, until a fitted model was made, the patient's
clinical sensitivity to the drug clearly appraised by using the model, and
an appropriate dosage regimen developed to keep his peripheral compartment
concentrations where they had been when sinus rhythm had been achieved
in the past. SZ could never show you that. An instance of this is described
in Therapeutic Drug Monit 15: 380-393, 1993.
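[A minimal sketch of a two-compartment bolus simulation, to show how a
peripheral compartment fills and empties on a different time course from
serum. The rate constants, volume and dose are purely illustrative and are
not fitted digoxin parameters such as Reuning's.]

# Euler integration of a two-compartment model after an IV bolus.
k10, k12, k21 = 0.02, 1.2, 0.15   # 1/h, hypothetical micro rate constants
v_central = 40.0                  # L, hypothetical central volume
dose = 0.5                        # mg IV bolus

a1, a2, dt = dose, 0.0, 0.01      # central amount, peripheral amount, step size (h)
for step in range(int(12 / dt) + 1):
    if step % int(1 / dt) == 0:   # report once per hour
        t = step * dt
        print(f"t = {t:4.1f} h  serum ~ {1000 * a1 / v_central:6.2f} ug/L"
              f"  peripheral amount ~ {a2:5.3f} mg")
    da1 = (-(k10 + k12) * a1 + k21 * a2) * dt
    da2 = (k12 * a1 - k21 * a2) * dt
    a1, a2 = a1 + da1, a2 + da2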
What about vancomycin? Here is another 2-compartment drug that is very
poorly handled with SZ 1-compartment approaches. See this in the first
article mentioned above, in Clin Pharmacokinetics, comparing MAP Bayesian
with a 2-compartment model versus SZ.
In addition, SZ, by transforming concentrations to their logarithms, makes
an erroneous assumption about the lab assay errors, by assuming a constant
CV%. This says that measurements of 10, 1, and 0.1 ug/mL carry roughly
equal weight when transformed to their logs. In fact, a level of 1.0 is
given 100 times the weight of a level of 10.0 by SZ, and a level of 0.1 is
given a weight of 100 x 100, or 10,000 times the weight of a level of 10.0.
Is this realistic? I think not. It does not make common sense to me.
Visibly different, if not significantly different, parameter values are
found (see Clin PK, 1991, page 469).
Optimally, each assay error should be carefully determined over its
working range, and the levels weighted by the reciprocal of their variance.
Yes, there are other sources of error, such as model misspecification and
errors in dosage preparation and administration, and in recording the times
samples are drawn. Most of these are not measurement noise but process
noise. Be that as it may, it is useful to KNOW the assay and to include its
easily determined errors to optimize the fitting process. This does seem to
make common sense.
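[To make the weighting argument concrete, a small sketch comparing the relative
weights implied by the constant-CV% (log-transform) assumption with
reciprocal-variance weights from a hypothetical assay error polynomial
SD(C) = a0 + a1*C; the coefficients are invented for illustration.]

levels = [10.0, 1.0, 0.1]   # ug/mL

# (a) Constant CV%: SD proportional to C, so the weight 1/SD^2 goes as 1/C^2.
w_constant_cv = [1.0 / c ** 2 for c in levels]

# (b) Hypothetical assay error polynomial SD(C) = a0 + a1*C determined over the
#     assay's working range; the weight is the reciprocal of the variance.
a0, a1 = 0.1, 0.05          # ug/mL intercept, dimensionless slope (invented)
w_assay = [1.0 / (a0 + a1 * c) ** 2 for c in levels]

for c, wc, wa in zip(levels, w_constant_cv, w_assay):
    print(f"C = {c:5.1f} ug/mL: constant-CV weight x{wc / w_constant_cv[0]:8.0f}, "
          f"assay-polynomial weight x{wa / w_assay[0]:5.1f}")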
When you use methods such as these, you can see that you actually CAN be
fairly precise in the achievement of desired target therapeutic goals
(individualized for each patient according to his/her need for the drug).
One wants to maximize the precision with which such target goals are
achieved. There is a new method, used by us and by the group of Mallet and
colleagues, called multiple model (MM) dosage design. It is described in
Clin Pharmacokinet 34: 57-77, 1998. It uses nonparametric population PK
models, and is the main reason, in my mind, for using nonparametric models
over parametric ones. For the first time, one now has a way to specifically
examine the predicted FAILURE to achieve the target goal, and to choose a
dosage regimen which is specifically designed to minimize that failure.
The aviation community uses MM approaches such as these, and planes fly
better. The military does also, and their missiles hit their targets with
smaller error circles.
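[A minimal sketch of the multiple model (MM) idea with a toy nonparametric
population: each support point is a (V, k, probability) triple, and the dose is
chosen to minimize the probability-weighted squared miss of the target
concentration across all support points, rather than hitting the target only
for the "average" parameters. All numbers are invented for illustration.]

import math

# Hypothetical nonparametric support points: (V in L, k in 1/h, probability)
support = [(15.0, 0.20, 0.3), (25.0, 0.12, 0.5), (40.0, 0.08, 0.2)]
target, t_eval = 5.0, 12.0      # aim for 5 mg/L at 12 h (illustrative)

def expected_squared_miss(dose):
    """Probability-weighted squared deviation from the target over all support points."""
    return sum(p * ((dose / v) * math.exp(-k * t_eval) - target) ** 2
               for v, k, p in support)

# Search a 5 mg grid from 5 to 1000 mg for the dose with the smallest expected miss
mm_dose = min((5.0 * i for i in range(1, 201)), key=expected_squared_miss)
print(f"MM-optimal dose ~ {mm_dose:.0f} mg, "
      f"expected squared miss = {expected_squared_miss(mm_dose):.2f}")

# For contrast, the dose that exactly hits the target for the probability-weighted
# "mean" parameters ignores the spread and misses more badly on average.
v_mean = sum(p * v for v, k, p in support)
k_mean = sum(p * k for v, k, p in support)
naive_dose = target * v_mean / math.exp(-k_mean * t_eval)
print(f"Mean-parameter dose ~ {naive_dose:.0f} mg, "
      f"expected squared miss = {expected_squared_miss(naive_dose):.2f}")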
Model-based, goal-oriented drug therapy can do the same for our patients,
but one will never see this if all one asks is that the patient be
perceived as a 1-compartment system, without dynamic effects, without being
able to examine diffusion into endocardial vegetations, for example, and
without modeling the growth and kill of microorganisms that are bathed in a
profile of serum concentrations determined by thoughtful fitting of data to
obtain a good combined PK/PD model. The more capable our models and control
strategies, the better we will do at the bedside.
Sincerely,
Roger Jelliffe
************************************************
Roger W. Jelliffe, M.D.
USC Lab of Applied Pharmacokinetics
CSC 134-B, 2250 Alcazar St, Los Angeles CA 90033
Phone (213)342-1300, Fax (213)342-1302
email=jelliffe.at.hsc.usc.edu
************************************************
Take a look at our Web page for announcements of
new software and upcoming workshops and events!!
It is http://www.usc.edu/hsc/lab_apk/
************************************************
Back to the Top
REPLY:
Can you give some feedback?
We cooked up a model that uses some exotic black-box mathematics that is
sufficiently refined in its metabolic modelling that covariant relations
have been accurately forecast on a consistent basis (within predicted
limits of accuracy of the pharmacological knowledge).
Drug response is predictable based on a priori PK and PD metabolic data to
within a 5 percent error rate in a general clinical population (no
discernable pattern of patient selection). The error rate achieved is the
same as that predicted. There is a limit to the completeness of the data
fed into the model which is accounted for automatically.
The model is applied slightly differently from the one you describe.
Instead of seeking out patient response, it seeks to identify patient
variance from the predicted outcome based on a pre-determined clinical
protocol. Because the model precludes trials of patient populations that
are not expected to respond, it serves as a useful tool for quickly
developing and administering clinical protocols with unmatched accuracy.
The next step is to gain some feedback on the projected cost savings this
model can generate in clinical trials as well as in patient care. Once the
success of the model was proven, we were left with no control to determine
how much was actually gained over other methods. Predictive medicine works
well, when it works at all, but there is no room for control studies to
ensure optimal utilization. The only control is past failures, and those
data are much too skewed to be of any use. Comparison to other environments
is necessary to perform a cost-benefit analysis.
DG
Back to the Top
Dear All:
I forgot one more thing, relating to drug interactions. When you have an
individualized, patient-specific fitted PK model of how a drug is behaving
in a patient, you have not only discussed the issue of drug-drug
interactions, you have actually quantified, in that patient, the combined
effect of all the interactions we know about, and all the rest as well
that we have not yet discovered.
Chris Destache has written a number of articles describing cost and benefit,
most of them finding that PK consults are useful and also save money and
shorten hospital and ICU stays.
Very best regards,
Roger Jelliffe
************************************************
Roger W. Jelliffe, M.D.
USC Lab of Applied Pharmacokinetics
CSC 134-B, 2250 Alcazar St, Los Angeles CA 90033
Phone (213)342-1300, Fax (213)342-1302
email=jelliffe.at.hsc.usc.edu
************************************************
Take a look at our Web page for announcements of
new software and upcoming workshops and events!!
It is http://www.usc.edu/hsc/lab_apk/
************************************************
Back to the Top
Nick,
I will try to be clearer about the approach. The PK and PD models you used
are statistical analysis tools for analyzing patient data sets where the
data is assumed to have some margin of error. This is simply a choice
of mathematical modeling.
The model we use assumes that the data we have is perfect and limited only
by its ability to address specific questions. One does not obtain
covariances or variances between assumed relations in response to questions
about the data set in the model; the model automatically generates sets of
relations and describes them in the form of a mathematical simulation. (It
sounds tricky, but works out to be a lot easier to use than Bayesian
analysis.)
In English, this is the same as pouring in the data and having the data
automatically churned until it comes out as a pictorial description of the
metabolic pathways and their covariant relations that anyone can understand
by merely studying the picture.
This approach was adopted because a decision was made that the data
generated by both clinical treatment and basic research was fundamentally
so accurate that it did not make sense to assume that the doctors or the
laboratory equipment were making any significant errors. Consequently, a
more appropriate mathematical modeling approach than Bayesian analysis was
called for.
And this is where things become hard to follow because one must be a
mathematician to understand the mathematics that was used. One uses the
same kind of mathematics for modeling black holes in space, economies, and
other very complex systems. It actually works if the data collected is
correct and indicates when any built-in assumptions are erroneous. And it
turns out that doctors have been much more clever researchers than any of
the suppliers of mathematical tools have ever assumed. They do tend to
produce very accurate data whether or not their analysis of the data is
correct. The model simply fills in for the difficulty of drawing
conclusions, but plays to the strength of medicine---that doctors are very
well trained in gathering data accurately.
Please do not ask about the specific mathematics. First, it will make your
head spin even if you are a mathematician. Second, it is proprietary. All
you need to accept to understand the fundamentals is that the analytical
approach taken uses a different kind of mathematical analysis that is
appropriate for analyzing good data.
The mathematics you have been using is intended for extracting hidden
information from poorly collected data. The model works and Bayesian
analysis flops in real life. It remains our conclusion that the answer is
obvious: data collection is not the problem; it is simply that the
analytical tools being used are inappropriate and are making garbage out of
good data.
I hope I was clearer this time.
At the bottom of this is a synthesis of some very clever mathematical
modeling and good medicine. The medicine will be familiar although some of
the data that was used to build the model is unique (and proprietary)
simply because it never occurred to anyone without the heavy-duty
mathematical support we had to treat it as data.
But all the real credit goes to the medical research. The mathematics
simply put the pieces together in a way that one could draw useful
conclusions consistently. We simply had a means of ensuring that we used
all of the useful data. The modeling tools you are familiar with cannot
assist you in selecting your data sets. They were never intended to perform
this kind of analysis and will not work if you try. They are more
appropriate to extracting information about heterogeneous populations, not
the homogeneous population of a clinical study.
Medicine is just too accurate today to treat clinical studies as generating
results that assume significant error in the data generated. Unfortunately,
the tools that are currently available for the analysis of clinical studies
assume that the data produced has large built-in errors, and they generate
erroneous conclusions as a result.
Daro Gross
Back to the Top
Daro,
On Mon, 4 May 1998 14:11:26 -0500 Daro Gross wrote:
> I will try to be clearer about the approach. The PK and PD models you used
> are statistical analysis tools for analyzing patient data sets where the
> data is assumed to have some margin of error. This is simply a choice
> of mathematical modeling.
Agreed.
>
> The model we use assumes that the data we have is perfect and limited only
> by its ability to address specific questions.
Fair enough. That's your choice.
> In English, this is the same as pouring in the data and having the data
> automatically churned until it comes out as a pictorial description of the
> metabolic pathways and their covariant relations that anyone can understand
> by merely studying the picture.
Sounds like the same way a bunch of monkeys and their typewriters produced
the works of Shakespeare. I look forward to your announcement of a
perpetual motion machine.
>
> This approach was adopted because a decision was made that the data
> generated by both clinical treatment and basic research was fundamentally
> so accurate that it did not make sense to assume that the doctors or the
> laboratory equipment were making any significant errors.
Which planet are you living on? Is maildrop.-at-.iname.com in the Solar
System? Perhaps the dark side of the moon?
> And it
> turns out that doctors have been much more clever researchers than any of
> the suppliers of mathematical tools have ever assumed. They do tend to
> produce very accurate data whether or not their analysis of the data is
> correct. The model simply fills in for the difficulty of drawing
> conclusions, but plays to the strength of medicine---that doctors are very
> well trained in gathering data accurately.
Ah! You must be referring to Dr McCoy on Star Trek.
> Please do not ask about the specific mathematics. First, it will make your
> head spin even if you are a mathematician. Second, it is proprietary.
Does that mean my head would not spin if I paid you enough?
> I hope I was clearer this time.
Clear as mud - but thanks for trying :-)
I am sorry I cannot take your claims seriously, but you should take heart
that prophets are rarely recognized in their own time.
--
Nick Holford, Center for Drug Development Science
Georgetown University, 3900 Reservoir Rd NW, DC 20007-2197
email:n.holford.-at-.auckland.ac.nz tel:(202)687-1618 fax:687-0193
http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm
Copyright 1995-2010 David W. A. Bourne (david@boomer.org)