- On 16 Apr 2002 at 09:40:59, David Foster (david.foster.-at-.adelaide.edu.au) sent the message


Hello all,

I'm just wondering how others handle the situation where the concentration range within a PK profile is very large, for example 10000-fold (0.05-500 ng/ml), due to a very sensitive assay. From the rich data that I have it is clear that there are at least two compartments: one rapid decline and one much longer decline. The former is more relevant for multiple dosing, while the latter is much more "interesting" to me as a phenomenon in this case. The problem I experience is that I have a choice:

1. "weight" the data heavily (1/y^2), which results in poor capture of the peak concentrations

or

2. don't "weight" the data at all and seriously under- or over-estimate the terminal phase concentrations

I don't really understand the problem, as the terminal phase is very well characterised (5+ samples), as is the initial decline. I'm performing POP-PK analysis, but have others experienced this problem more generally, and how did you deal with it? Hope this sparks a bit of discussion...

David Foster, PhD

Department of Clinical and Experimental Pharmacology

Faculty of Health Sciences

Adelaide University

Adelaide, South Australia 5005

Email: david.foster.-at-.adelaide.edu.au

http://www.adelaide.edu.au/Pharm/index.htm

[Do you have an estimate of the variance in the high concentration

data and the low concentration data (and in between)? This should

guide you with the choice of weight. A 'simple' 1/val^2 for all data

may not be satisfactory. You might be able to develop a variance -

concentration (v-c) relationship and use this to estimate appropriate

weights. With a good idea of the form of the v-c relationship,

extended least squares could be used - db] - On 16 Apr 2002 at 11:59:05, David Bourne (david.-a-.boomer.org) sent the message


[Two replies - db]

From: Iñaki Fernández de Trocóniz

Date: Tue, 16 Apr 2002 17:21:40 +0200

To: david.aaa.boomer.org

Subject: Re: PharmPK Large concentration ranges in PK analysis

Dear David,

What about using a log transformation of your concentration data?

Best,

Iñaki

Iñaki F. Trocóniz, Ph.D.

Farmacia y Tecnología Farmacéutica

Facultad de Farmacia

Universidad de Navarra

Pamplona 31080

Spain

e-mail: itroconiz.aaa.unav.es

---

From: "Bachman, William"

Date: Tue, 16 Apr 2002 12:19:05 -0400

To: david.-at-.boomer.org

Subject: RE: PharmPK Large concentration ranges in PK analysis

In situations where the data covers a large concentration range such as

yours, the possibility exists that different error structures may be

exhibited over the range. Often, near the limit of quantitation, the error

may be homoscedastic whereas at higher concentrations, the error may be

heteroscedastic. Some population software, such as NONMEM, allows you to

model the error structure of your data instead of choosing a weighting scheme a priori. For example, you might code an additive plus proportional

random error model to cope with the differing error structure across the

concentration range. The data will then determine if this error model is

appropriate. (If either the additive or proportional component of the error

model predominates, the model parameter representing the non-dominant

component will go to zero and can be dropped from your model.)
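The behaviour of such a combined model can be sketched outside any particular package. A minimal Python illustration, assuming a hypothetical 0.05 ng/ml additive floor and 10% proportional error (placeholders spanning the 10000-fold range from the original question, not fitted values):

```python
import math

def combined_error_sd(pred, sd_add, sd_prop):
    """SD under an additive-plus-proportional residual error model:
    Var = sd_add**2 + (sd_prop * pred)**2."""
    return math.sqrt(sd_add**2 + (sd_prop * pred)**2)

# Hypothetical values: 0.05 ng/ml additive floor, 10% proportional error.
preds = [0.05, 0.5, 5.0, 50.0, 500.0]
sds = [combined_error_sd(c, 0.05, 0.10) for c in preds]
weights = [1.0 / s**2 for s in sds]

# Near the LOQ the additive term dominates (near-constant SD);
# at high concentrations the proportional term dominates (near-constant CV).
```

If the fitted additive term collapses toward zero, the model reduces to constant CV; if the proportional term does, it reduces to homoscedastic error, which is the point about letting the data choose.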

William J. Bachman, Ph.D.

GloboMax LLC

7250 Parkway Dr., Suite 430

Hanover, MD 21076

bachmanw.at.globomax.com - On 16 Apr 2002 at 15:17:49, David Bourne (david.aaa.boomer.org) sent the message


[Three more replies - db]

From: Joel Owen

Date: Tue, 16 Apr 2002 13:42:31 -0400

To: david.-a-.boomer.org

Subject: Re: PharmPK Large concentration ranges in PK analysis

David,

You might consider a model which treats the second phase as drug 'binding

with high affinity and to a significant extent' to a target site which has

some limited total amount. The approach is outlined in a recent article

by Mager and Jusko entitled "General Pharmacokinetic Model for Drugs

Exhibiting Target-Mediated Drug Disposition", in J. Pharmacokinetics and

Pharmacodynamics, vol 28, No. 6, December 2001. An example of this

phenomenon is the receptor binding of ACE inhibitors.

Joel S. Owen, Ph.D.

PK/PD Scientist

Cognigen Corporation

395 Youngs Road

Buffalo, NY 14221

(v) (716) 633-3463 ext. 247

(f) (716) 633-7404

(e) joel.owen.aaa.cognigencorp.com

http://www.cognigencorp.com/

---

From: Stephen Day

Date: Tue, 16 Apr 2002 14:54:05 -0400 (EDT)

To: david.-at-.boomer.org

Subject: Re: PharmPK Large concentration ranges in PK analysis

David,

I'm not sure this is relevant to your question, but isn't it possible your data are good (have little error) but the model is bad?

For example, is it possible that your drug is rapidly

eliminated as parent (or unstable conjugate) in bile

or urine and is then slowly re-absorbed until the

feces (or urine) is excreted? This could give rise to

the long "terminal elimination" phase you see, and

would not fit a two compartmental model.

Steve

Stephen Day

Merck-Frosst Centre for Therapeutic Research

Kirkland, QC CANADA

---

From: Nick Holford

Date: Wed, 17 Apr 2002 07:08:56 +1200

To: david.-a-.boomer.org

Subject: Re: PharmPK Large concentration ranges in PK analysis

David,

Weighting is not a binary choice (1/y^2 or 1). If you use an extended

least squares objective function (ELS) then you can be more flexible

in modelling the residual error. Programs such as MKMODEL, ADAPT and

NONMEM offer this choice. The use of a mixed additive and

proportional error model is often helpful.

Peck CC, Beal SL, Sheiner LB, Nichols AI. Extended least squares

nonlinear regression: A possible solution to the "choice of weights"

problem in analysis of individual pharmacokinetic parameters. Journal

of Pharmacokinetics and Biopharmaceutics 1984;12(5):545-57.
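For the record, the ELS objective takes a simple form. A plain-Python sketch (the additive-plus-proportional variance model is one common choice, used here purely for illustration):

```python
import math

def els_objective(obs, pred, sd_add, sd_prop):
    """Extended least squares objective: sum of (y - f)**2 / V + ln(V),
    where the variance model V = sd_add**2 + (sd_prop * f)**2 is
    estimated along with the structural PK parameters.  The ln(V)
    term penalises inflating the variance to hide misfit."""
    total = 0.0
    for y, f in zip(obs, pred):
        v = sd_add**2 + (sd_prop * f)**2
        total += (y - f)**2 / v + math.log(v)
    return total
```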

Nick

--

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford.aaa.auckland.ac.nz

http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm - On 16 Apr 2002 at 17:12:54, "Serge Guzy" (GUZY.-at-.xoma.com) sent the message


Using a constant CV on untransformed data or a homoscedastic assumption on log-transformed data leads to similar results. Usually the logarithmic transformation is more stable, but I do not think that the fitted curves are really different.
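The near-equivalence can be checked numerically. A plain-Python sketch (illustrative numbers only): simulate constant-CV error on the linear scale and confirm that the SD on the log scale comes out approximately equal to the CV, i.e. approximately homoscedastic:

```python
import math
import random

random.seed(0)
true_conc = 50.0
cv = 0.10

# Constant-CV (proportional) error on the linear scale ...
samples = [true_conc * (1.0 + cv * random.gauss(0, 1)) for _ in range(100000)]

# ... looks close to homoscedastic on the log scale: SD(log y) ~= CV.
logs = [math.log(s) for s in samples]
mean_log = sum(logs) / len(logs)
sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (len(logs) - 1))
```

To first order, Var(ln y) is approximately CV^2, which is why the two weighting schemes give similar fits when the CV is modest.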

Serge Guzy

Head of Pharmacometrics

Xoma - On 16 Apr 2002 at 22:10:33, Roger Jelliffe (jelliffe.aaa.hsc.usc.edu) sent the message


Dear David:

You might consider weighting first by what you really know

about your data - the assay error pattern, especially when the

concentrations vary over such a wide range. We feel that weighting

is best done according to the relative credibility of the data

itself, for example, the Fisher information, the reciprocal of the

variance with which each data point is known.

You can start by determining the assay error pattern: measure a blank, low, middle, high, and very high sample covering

the working range of the assay, each in at least quadruplicate, for

example. The more samples and the more determinations per sample the

better. Then you can fit a polynomial to this relationship such that, for example, the assay SD = A0 + A1*C1 + A2*C2 + A3*C3, where C1 is the mean of the concentrations for each sample, C2 is C1 squared, C3 is C1 cubed, and the A's are the coefficients. Usually only the squared term is needed to get a pretty good relationship, but this is much better than simply an intercept and a slope. This has been true for almost all assays we have seen. This has been discussed in

Jelliffe R, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis

P, Barbaut X, and Tahani B: Therapeutic Drug Monitoring 15: 380-393,

1993.

After this, the remaining intraindividual variability, which

we call gamma, can be estimated using a parametric population

modeling program such as the IT2B iterative Bayesian program in the

USC*PACK collection. This lets you see the relative contribution of

the assay error against the other environmental factors such as the

errors in the preparation, administration, and recording of the

doses, the errors on recording when the samples were obtained, the

model misspecification, and any unsuspected changes in the PK/PD

parameter values during the period of the data analysis.

In this way, weighting is not an art form, but is done in a

way that respects the relative contributions of the assay error and

the other sources of noise in the system. If gamma is low (2 for

example) then you can say you have a pretty clean study. If it is 10,

then there is considerably more noise in the environment. With proper

skepticism, gamma might even be a way to compare the relative

therapeutic precision in which a certain form of drug therapy is

given to a group of patients.

Very best regards,

Roger Jelliffe

Roger W. Jelliffe, M.D. Professor of Medicine,

Laboratory of Applied Pharmacokinetics,

USC Keck School of Medicine

2250 Alcazar St, Los Angeles CA 90033, USA

email= jelliffe.aaa.hsc.usc.edu

Our web site= http://www.lapk.org - On 16 Apr 2002 at 23:21:37, "O'Connor, Ed" (eoconnor.aaa.Therimmune.com) sent the message

Back to the Top

Then by definition wouldn't the data be heteroscedastic? How finely can one dissect the data? It might be more appropriate to transform the data as suggested rather than assemble a montage of differing fits. And if we are basing conclusions on the assumption that drug effects are greater than non-drug effects, would not a non-parametric regression be more appropriate?


Dear PharmPK guys:

When the assay SD or variance varies in some way with the

measured data, then it is said to be heteroscedastic. This is also

true after transforming to log concentration data, as it assumes a

constant assay coefficient of variation (CV). Many people have felt

that a constant percent error is OK. Transformations usually are not

as useful in our hands, as one must also transform the error models

for the data, to be correct, and this is not usually done. Usually,

that does not lead to optimal weighting by the Fisher information of

the data points, the reciprocal of the variance of each data point.

For example, consider an assay with a 10% CV. At a concentration of

10 units, the assay SD is 1 unit, the variance is also 1, and the

weight is 1. Now, at a concentration of 20 units, the SD is 2,

the variance is 4, and the weight is 1/4. That is the problem with

assuming a constant CV rather than the Fisher information. While a

constant percent error may "look OK" intuitively on a graph of the

data, it does not correctly adjust for the concentration unless that is the true situation. Only then is it really

correct. Usually there is also at least some intercept value, and

usually also there is a gentle bend upward in the relationship

between the concentration on the horizontal axis, and the assay SD on

the vertical axis.
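The arithmetic of that example fits in a few lines of Python (the function name is ours; the 10% CV and the 10- and 20-unit concentrations come from the text above):

```python
def fisher_weight(conc, cv):
    """Weight = 1/variance for an assay with a constant CV:
    SD = cv * conc, so weight = 1 / (cv * conc)**2."""
    return 1.0 / (cv * conc) ** 2

# 10% CV: at 10 units, SD = 1, variance = 1, weight = 1;
# at 20 units, SD = 2, variance = 4, weight = 1/4.
```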

We think one should dissect the data as finely as one can,

according to what is knowable. If there are multiple responses such

as concentrations and effects, then each response should ideally be

weighted by the reciprocal of its respective variance.

In addition, there are the other sources of uncertainty such

as the errors in preparation and administration of the doses,

recording when they were given, errors recording the times at which

the responses are obtained, the model misspecification, and any

changing parameter values during the period of the data analysis.

This remaining error can then be estimated separately from the assay

error, so you can know how much is due to the assay and how much to

the other noise in the therapeutic environment.

Very best regards,

Roger Jelliffe

Roger W. Jelliffe, M.D. Professor of Medicine,

Laboratory of Applied Pharmacokinetics,

USC Keck School of Medicine

2250 Alcazar St, Los Angeles CA 90033, USA

email= jelliffe.at.hsc.usc.edu

Our web site= http://www.lapk.org - On 24 Apr 2002 at 11:28:32, "Prashant V Bodhe" (prashnvb.-at-.rediffmail.com) sent the message


Dear All

Regarding David's data: since the data in the terminal phase and the validation data are not available, it is not easy to comment on it.

One of the most neglected aspects of bioanalytical technique is as follows. The guidelines give a detailed description of LOD, LOQ, etc. However, there is no test defined to judge the "power of resolution" of a method at LOQ levels.

What I mean is as follows. A method having, say, 50 ng/ml as its LOQ is used for analysing samples from a BA study. The terminal phase samples may show results like 60 ng/ml, 55 ng/ml and 51 ng/ml (or 90 ng/ml, 75 ng/ml, 60 ng/ml). This may mean that the drug has a very long terminal half-life.

On the contrary, if we take into consideration the variability of analytical techniques, especially at levels near the LOQ, carry-over in instruments, etc., how much confidence can we place in these results? Should there be a criterion for deciding the "power of resolution near the LOQ" of a bioanalytical method?

any thoughts?

One way of confirming the results would be to analyse double or triple the quantity of sample while keeping the reconstitution volume the same; the amount of drug injected into the HPLC will then be above the LOQ. However, this introduces another validation parameter: proving non-interference due to 2 or 3 ml of matrix.

Maybe mass balance studies would indicate a cut-off point in such a case. But such studies cannot be carried out easily and by everyone.

Dr. Prashant Bodhe - On 26 Apr 2002 at 10:10:38, James Hillis (JHillis.at.hfl.co.uk) sent the message


Prashant, I'm an analytical chemist by training and quite new to

bioanalysis. I agree with your point about the power of differentiation.

While many different parameters are well defined and understood in this

area, I feel that the graduation of response of the analytical system is not

well considered. By graduated response, I mean: can the analytical system differentiate between, e.g., 1 ng/ml steps? If it can only differentiate between 10 ng/ml steps, then little can be inferred from terminal values such as 60 ng/ml, 55 ng/ml and 51 ng/ml.

There are inherent difficulties in compound-specific extraction and analysis from complex matrices, and relatively large errors are tolerated. This weakens any inference made as to a relationship between dose and response. Yet these relationships are inferred and used without reference to the large levels of uncertainty.

James Hillis

jhillis.aaa.hfl.co.uk


Copyright 1995-2010 David W. A. Bourne (david@boomer.org)