Dear all,
Does anyone use the Watson LIMS in connection with the PK software
Kinetica, both from InnaPhase?
We would like to get in contact with people experienced with the data transfer
and BLQ flagging within the LIMS.
Thanks for your help!
Dorothee Krone
Scientist Pharmacokinetics
Viatris GmbH & Co. KG
Early Phase Development
Bioanalytics & Pharmacokinetics
Weismüllerstrasse 45
60314 Frankfurt/Main
Germany
e-mail: Dorothee.krone.at.viatris.de
THE ANSWER:
Watson automatically flags values that are above the ULOQ or below the LLOQ.
Watson can export data directly to Kinetica: choose Kinetica from the
PK menu and all of the study data are exported. Any BLQ value is marked
with a "<" sign in Kinetica.
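As a minimal sketch only (not Watson or Kinetica code; the limits and the function name are hypothetical), the kind of LLOQ/ULOQ flagging and "<" marking described above could look like this in Python:

    # Hypothetical illustration of LLOQ/ULOQ flagging and BLQ export marking.
    # Not Watson/Kinetica code; the limits and names are invented for the example.
    LLOQ = 0.2    # assumed lower limit of quantification (ng/mL)
    ULOQ = 500.0  # assumed upper limit of quantification (ng/mL)

    def flag_and_export(conc):
        """Return (flag, exported value) for a single measured concentration."""
        if conc < LLOQ:
            return "BLQ", "<" + str(LLOQ)   # below the limit: exported with a "<" marker
        if conc > ULOQ:
            return "ALQ", ">" + str(ULOQ)   # above the limit: flagged for dilution/re-assay
        return "OK", str(conc)

    for c in (0.05, 1.7, 650.0):
        print(c, flag_and_export(c))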
Does that help? If not, then what is it that you are trying to do?
Best Regards,
Dan Hirshout
InnaPhase Corporation
dhirshout.-a-.innaphase.com
Dan:
There was some recent traffic on the NONMEM listserv about how to
handle BLQ data. How is this handled by Kinetica?
Paul
Paul Hutson, Pharm.D.
Associate Professor (CHS)
UW School of Pharmacy
777 Highland Avenue
Madison, WI 53705-2222
Dear All:
Concerning BLQ data, lab assay errors, and sensitivity: we feel
that the best thing for each lab to do is first to determine its own
assay error pattern. In our USC*PACK software, the assay error
polynomial for the assay data is
assay SD = I + J x C + K x C^2,
where C is the concentration, C^2 is the square of the concentration,
and I, J, and K are the coefficients of the polynomial describing the
usually nonlinear relationship between the assay concentration and the
SD with which it is measured. This is an easy and cost-effective way to
get an estimate of the SD with which any single sample is measured. It
permits fitting the data by its Fisher information,
the reciprocal of the assay variance.
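As a toy illustration only (the coefficients below are invented, not from any real assay), the polynomial and the resulting Fisher-information weights might be computed like this:

    # Sketch of the assay error polynomial: SD(C) = I + J*C + K*C^2.
    # The coefficients are hypothetical, for illustration only.
    I, J, K = 0.05, 0.10, 0.001

    def assay_sd(conc):
        """Assay SD predicted at a given concentration."""
        return I + J * conc + K * conc ** 2

    def fisher_weight(conc):
        """Weight for fitting = Fisher information = 1 / assay variance."""
        return 1.0 / assay_sd(conc) ** 2

    for c in (0.0, 0.5, 5.0, 50.0):   # the polynomial still gives an SD at the blank
        print(c, assay_sd(c), fisher_weight(c))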
I am following this with a repeat of what I mentioned some time
earlier, about accuracy and precision. They ARE important. We are modeling
population data so we can act on it OPTIMALLY, that is, to develop dosage
regimens that achieve desired target goals with maximal precision. It is not
just that the assay should be acceptably precise over its working range,
but also that its error should be carefully determined so it can be used to
fit the data by its Fisher information. Different weighting schemes clearly
yield different population model parameter values. This is one of the reasons
that linear regression on the logs of the levels, with its inappropriate
weighting scheme built into the fit, often yields significantly different
parameter values compared to weighted nonlinear least squares or the MAP
Bayesian fitting procedure, since these can use the correct weighting scheme
based on the assay error polynomial and linear regression cannot.
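To make the point about weighting concrete, here is a small simulated sketch (hypothetical parameters, error coefficients, and sampling times; not USC*PACK code) comparing ordinary regression on the logs of the levels with a nonlinear least-squares fit weighted by the assay error polynomial:

    # Simulated comparison of two weighting schemes; all numbers are invented.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    t = np.array([0.5, 1, 2, 4, 8, 12, 24], dtype=float)   # sampling times (h)
    C0_true, k_true = 10.0, 0.25                            # hypothetical one-compartment values
    I, J, K = 0.05, 0.10, 0.001                             # assumed assay error polynomial

    def model(t, C0, k):
        return C0 * np.exp(-k * t)

    true_c = model(t, C0_true, k_true)
    sd = I + J * true_c + K * true_c ** 2                   # SD predicted by the polynomial
    c_obs = np.maximum(true_c + rng.normal(0.0, sd), 0.01)  # simulated levels, floored so log() is defined

    # 1) Linear regression on the logs of the levels (implicit, inappropriate weighting)
    slope, intercept = np.polyfit(t, np.log(c_obs), 1)
    print("log-linear fit:   C0 =", np.exp(intercept), " k =", -slope)

    # 2) Weighted nonlinear least squares, weights = 1/SD^2 (Fisher information)
    popt, _ = curve_fit(model, t, c_obs, p0=[5.0, 0.1], sigma=sd, absolute_sigma=True)
    print("weighted NLS fit: C0 =", popt[0], " k =", popt[1])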
The issue of LOQ is also important here. When we have no other
info about the specimen except the measured value itself, then there most
certainly IS a LOQ. However, when we do most PK work, that is not the case.
We know, with reasonable precision, when the doses were given and when the
samples were obtained. So we know the drug is really present. Even simple
linear models show us that the last molecule is theoretically never
excreted. So, instead of having to ask, as we must in toxicological work,
if the drug is PRESENT OR NOT, and having therefore to develop a LOQ in
that situation, in PK/PD work we know the drug really is present. The
question being asked is not the same as in toxicology. It is instead - HOW
MUCH drug is present?
Most people agree that weighting data by its Fisher information is
appropriate - the reciprocal of the variance of the data point. It works
quite well. The point is that when you determine the assay error and
express it as a polynomial function of the concentration, that important
relationship continues over the entire range of the assay, down to and
including the blank, if you set it up correctly. This point is discussed in
more detail in an article in Therap Drug Monit 15:380-393, 1993, especially
the section on Evaluating the Credibility of Population Parameter Values
and Serum Level Data, pp. 386-391. Thus, not only should one determine if
the assay is sufficiently precise or not, but even after that decision is
made, there remains the issue of fitting the data correctly by its Fisher
information. Determining the assay error polynomial in this way is a
cost-effective way to do this. It has the fringe benefit that there is no LOQ
for PK work.
Finally, there comes the issue of the remaining part of the
intraindividual variability - that due to the errors with which the various
doses have been prepared and administered, the errors in recording when the
doses were given, the errors in recording when the various serum samples
(or other responses) were obtained, the misspecification of the structural
model, and any unsuspected changes in parameter values that have taken
place during the period of the data analysis. All these are remaining
sources of intraindividual variability. If you use the iterative two-stage
Bayesian (IT2B) population modeling program in the USC*PACK collection, they
can be computed as a single overall parameter, which we call gamma. In this
way it is possible to have a reasonable estimate of the relative amount of
noise due to the assay error and of that
due to the other sources of error.
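A minimal sketch of that partition of the noise, assuming (and this is an assumption about the IT2B convention, not a statement of it) that gamma acts as a multiplier on the assay SD polynomial; the coefficients and the gamma value are invented:

    # Hypothetical partition of intraindividual noise into assay error and the
    # "everything else" captured by gamma. Assumes a multiplicative gamma on the
    # assay SD polynomial (an assumption about the convention); numbers are invented.
    I, J, K = 0.05, 0.10, 0.001
    gamma = 1.8                        # gamma > 1 means noise beyond the assay alone

    def assay_sd(c):
        return I + J * c + K * c ** 2

    def total_sd(c):
        return gamma * assay_sd(c)     # assay error scaled up by the other error sources

    for c in (0.5, 5.0, 50.0):
        print(c, assay_sd(c), total_sd(c), 1.0 / total_sd(c) ** 2)  # last value: fitting weight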
This computation of gamma is also starting to be implemented in
our new nonparametric adaptive grid (NPAG) population modeling software, so
that eventually it will not be necessary to use parametric modeling
software for this purpose any more. Parametric software using the FOCE
approximation for the log-likelihood function was recently compared with
NPAG by Bob Leary at the PAGE meeting in Paris in June. He showed, in a
carefully simulated population study, that the FOCE approximation was
associated with loss of statistical consistency and significant errors,
while the nonparametric NPAG method did not have this problem, as the
likelihood calculations are exact. In addition, the FOCE approximation was
associated with a significant loss of statistical efficiency and
statistical convergence, while NPAG was much more efficient. Dr. Leary's
data and the graphs of the results can be seen, under "New developments in
population modeling", on our web site, www.lapk.org.
Once again, in PK/PD work, there does not have to be any BLQ or
LOQ. I look forward to discussing this more with you.
Very best regards,
Roger Jelliffe
Also see: http://www.boomer.org/pkin/
Roger W. Jelliffe, M.D. Professor of Medicine,
Laboratory of Applied Pharmacokinetics,
USC Keck School of Medicine
2250 Alcazar St, Los Angeles CA 90033, USA
email= jelliffe.-a-.hsc.usc.edu
Our web site= http://www.lapk.org
Paul,
For AUC calculations:
When the linear rule is applied to the AUC calculation, BLQ or zero data points
are not included in the AUClast (AUC from t=0 to the last sampling time)
calculation if no normal-status data follow the BLQ or zero data points.
BLQ data (can be set as Default, 0, or Missing):
Default - the BLQ value is replaced by the LQ value itself (e.g., <0.2 is
calculated as 0.2).
Set as 0 - all BLQ data are calculated as 0.
Set as missing - the BLQ data are skipped and not used in the
calculation.
Note: in order to flag data as BLQ, identify each undetectable data point
with a "less than" sign (<) before the value.
Moreover, BLQ values before the first non-zero normal data point can be treated as 0
(select the check box to treat all BLQ before the first quantifiable data as
0 under the AUC* method options). A small worked example of these settings follows below.
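Here is that example (hypothetical profile and LQ; not Kinetica code), showing how the three BLQ settings and the trailing-BLQ rule above change AUClast under the linear trapezoidal rule:

    # Hypothetical example of the BLQ options described above, applied to a
    # made-up profile with LQ = 0.2 and the linear trapezoidal rule.
    times = [0.0, 1.0, 2.0, 4.0, 8.0, 12.0]
    raw   = ["<0.2", 1.5, 2.1, 1.0, 0.4, "<0.2"]   # "<" marks undetectable points

    def substitute(value, mode):
        """Return (concentration, was_blq), or None to drop the point ('missing')."""
        if isinstance(value, str) and value.startswith("<"):
            lq = float(value[1:])
            if mode == "default":
                return lq, True      # use the LQ value itself (<0.2 -> 0.2)
            if mode == "zero":
                return 0.0, True     # treat BLQ as 0
            return None              # mode == "missing": skip the point entirely
        return float(value), False

    for mode in ("default", "zero", "missing"):
        pts = []
        for t, v in zip(times, raw):
            s = substitute(v, mode)
            if s is not None:
                pts.append((t, s[0], s[1]))
        # Trailing BLQ/zero points have no quantifiable data after them, so they
        # do not contribute to AUClast under the linear rule described above.
        while pts and (pts[-1][2] or pts[-1][1] == 0.0):
            pts.pop()
        auc = sum((t2 - t1) * (c1 + c2) / 2.0
                  for (t1, c1, _), (t2, c2, _) in zip(pts, pts[1:]))
        print(mode, "AUClast =", round(auc, 3))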
Best Regards,
Dan Hirshout
InnaPhase Corp.