- On 13 Jun 2005 at 18:49:16, Daniel Byrd (byrdd.-at-.cox.net) sent the message

TO: PharmPK

Nick Holford (Department of Pharmacology & Clinical Pharmacology at the

University of Auckland in New Zealand) asked:

[snip]

> What do you mean by ....?

Fair enough. Through PharmPK, I asked you and Hans Proost (Department of

Pharmacokinetics and Drug Delivery at the University Centre for Pharmacy in

the Netherlands) to define and help me understand the STS (the "standard

two step") model, which turned out to be more of a procedure or method than

a model, but is an acronym used in pharmacokinetic analysis. You did. In

return, I will try to answer your questions, as follows:

> 1. "Risk assessment"

To me, a risk assessment consists of the process of (or the procedures

involved in) estimating risk. A risk assessment is usually accomplished with a model.

(a) I (and some of my colleagues) define risk as "the probability of a

future loss," which is, in part, Kaplan's original definition and thus

consistent with it.

(b) Our definition has the advantage of being axiomatic. See:

P.C. Fishburn, "Foundations of risk measurement I: Risk as probable loss."

Management Science 30: 296-306 (1984). AND P.C. Fishburn, "Foundations of

risk measurement II: Effects of gain on risk." J. Mathematical Psychol. 25:

226-242 (1984).

(c) To me, risk assessment also is the first part of a three-part process

called risk analysis. The other (following) parts are risk management and

risk communication. (Before you try to manage or explain a risk, first try

to understand the risk.)

(d) Some risk assessments predict measurable outcomes. In these

instances, risk assessments resemble experimental hypotheses. The

estimates are subject to validation.

(e) When a risk assessment predicts some number of cases, and the

prediction is subject to validation or verification, feelings about the

risk (risk perceptions) do not change the measured outcome. Thus, many

risk assessors wonder how the incorporation of risk perception could ever

contribute to estimation. If so, risk perception is properly part of the

management process.

(f) Other definitions of risk exist.

> 2. "Poisson model"

The Poisson equation motivates a Poisson model.

(a) The (fundamental) Poisson equation is 1 - e^(-x), where x is the average

number of hits per cell. Imagine that a hit is a discrete event, like an

irreversible change. E.g., the equation estimates the probability that at

least one hit will fall into a given cell in a grid, given an average

number, x, of hits per cell.

(b) The (discrete) probability function of a Poisson model is

e^(-x) * x^k / k!, where x is the average number of hits per cell, k is the

observed number of hits, and k! is the factorial of k.
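Both expressions in (a) and (b) can be checked numerically. A minimal Python sketch (the hit rates used are illustrative values only):

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k hits when the average number of hits per cell is lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def prob_at_least_one_hit(lam):
    """The 'fundamental' Poisson equation: 1 - e^(-lam)."""
    return 1.0 - math.exp(-lam)

lam = 1.0  # one hit per cell on average
print(round(prob_at_least_one_hit(lam), 4))   # 0.6321
print(round(poisson_pmf(0, lam), 4))          # 0.3679 = e^(-1), the chance of zero hits
# The two agree: P(at least one hit) = 1 - P(zero hits).
```

Note that the "fundamental" equation in (a) is just the k = 0 case of the probability function in (b), subtracted from one.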

(c) These (and similar) probabilistic models depend on the results of

imaginary experiments, even conceptual mathematical experiments (Gedanken

experiments), involving tossing coins, rolling dice, or analogous "random"

events. The assumption that observers cannot distinguish between outcomes

will condition these models. If some way exists to distinguish between the

outcomes of experiments (e.g., rolls of the dice), the model provides an

incomplete description of the process.

(d) Try "Statistical Distributions" by Merran Evans, Nicholas Hastings and

Brian Peacock [ISBN: 0471371246] Wiley-Interscience, third edition (2000)

pp. 221, OR download the compendium of probability distributions available at

http://www.causascientia.org/math_stat/Dists/Compendium.pdf. (P.S. The

file illustrates Poisson distributions on pp. 101 and 103.)

Augusto Sanabria, Ph.D., sent the internet address for "Compendium" to me

via RiskAnal, another listserver. He also reads PharmPK. Augusto works as

a modeler at the Risk Research Group in the Geohazards Division of the

Australian government [Geoscience Australia (www.ga.gov.au)] at

Jerrabomberra Avenue and Hindmarsh Drive in Symonston

[Augusto.Sanabria.aaa.ga.gov.au]

> 3. "Filtered Poisson model"

I feel certain that a competent mathematician can provide a better

definition. However, ....

(a) To me, a filtered Poisson model uses the same (or additional) values

(parameters, variables) to estimate the probability of some outcome through

a standard Poisson model. However, even the values that an unfiltered

Poisson model would use get preprocessed through other equations, so their

outcomes (or their probability density functions) differ. Thus, the

Poisson model delivers "filtered" estimates.

(b) The U.S. Environmental Protection Agency (EPA) based their regulatory

model of potency on the carcinogenic process, or "mutagenic hits." The

somatic cell theory of carcinogenesis holds that cancer cells result from

somatic cell mutations. This model also was a filtered Poisson model. It

modeled the multistage carcinogenic process, using exposure (dose) as an

additional variable. It processed information from toxicological or

epidemiological observations to generate a carcinogenic

"potency." Unfortunately, the Agency's model made some untenable

assumptions. Among these were the ideas that the number of stages in the

model was a function of the different exposures (doses) used, and that a

carcinogen only altered one stage in one (increased risk) direction.

(c) EPA used this altered multistage model to estimate an upper bound to

risk for regulatory purposes, not as a model of the biology of carcinogenesis.

> 4. "Suresh Moolgavkar's 'two-stage' model"

Suresh Moolgavkar is a physician-mathematician, who currently works at the

University of Washington in Seattle.

(a) Moolgavkar's "two-stage" model, sometimes described as an MVK model

(Moolgavkar-Venzon-Knudson), is an exposure (dose) independent model of the

(biological) carcinogenic process. It allows for the expansion and

contraction of target cells. This "two-stage" model incorporates more of

the biology of carcinogenesis into estimates of potency, but it requires an

external model of the relationship between carcinogen exposure (dose) and

mutagenic potency for untransformed cells and transformed (or initiated) cells.

(b) Moolgavkar and his colleagues published "complete" or "closed form"

versions of this "two-stage" model. See: W.F. Heidenreich, E.G. Luebeck

and S.H. Moolgavkar, Some properties of the hazard function of the

two-mutation clonal expansion model. Risk Anal. 17(3): 391-399 (1997). For

practical applications, see the following two citations.

(c) S.H. Moolgavkar, E.G. Luebeck and E.L. Anderson, Estimation of unit

risk for coke oven emissions. Risk Anal. 18(6): 813-825 (1998).

(d) S.H. Moolgavkar, E.G. Luebeck, J. Turim and L. Hanna, Quantitative

assessment of the risk of lung cancer associated with occupational exposure

to refractory ceramic fibers. Risk Anal. 19(4): 599-611 (1999).

(e) For several years now, James D. Wilson and I have tried to understand

the implications of expanding and contracting target cells for the mode of

action in carcinogen risk assessment and for the toxicology of an

interesting substance, dioxin (2,3,7,8-tetrachlorodibenzo-p-dioxin). We

published several abstracts at Society for Risk Analysis meetings, and Jim

gave a talk about dioxin at a meeting of SRA's Dose-Response section.

(f) The necessity of understanding exposures to carcinogenic substances and

converting these exposures into doses explains why I and some of my

colleagues (e.g., Tony Cox or Paul Price) follow developments in

pharmacokinetics.

5. Your use of the term STS refers to population pharmacokinetics.

I have some problems with the STS model (or procedure), as used by many

kineticists. To refresh everyone's memory, the STS model requires that the

investigator:

(a) Estimate parameters for each individual, e.g., clearance (CL) and

volume of distribution (Vd).

(b) Calculate the average and standard deviation for each parameter across

the sample of individuals. [Measured volumes of distribution and clearances

should generate a half-life for the substance in question for this population.]

(c) The cited averages estimate the mean clearances and the associated

standard deviations of a population. These average values relate to each

other through an equation that describes a hypothetical, average

individual's handling of a chemical substance: CL = kel x Vd, AND kel

= 0.693/t1/2.
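The three STS steps above can be sketched in a few lines of Python; the per-subject clearance and volume values are illustrative, not taken from any study:

```python
import math
import statistics

# Hypothetical per-subject estimates from step (a).
cl = [4.2, 5.1, 3.8, 4.9, 4.5, 5.6, 4.0, 4.7]          # clearance, L/h
vd = [38.0, 45.0, 35.0, 42.0, 40.0, 48.0, 36.0, 41.0]  # volume of distribution, L

# Step (b): mean and standard deviation of each parameter across the sample.
cl_mean, cl_sd = statistics.mean(cl), statistics.stdev(cl)
vd_mean, vd_sd = statistics.mean(vd), statistics.stdev(vd)

# Step (c): the averages describe a hypothetical average individual through
# CL = kel * Vd and kel = 0.693/t1/2, so t1/2 = 0.693 * Vd / CL.
kel = cl_mean / vd_mean       # elimination rate constant, 1/h
t_half = math.log(2) / kel    # half-life, h

print(f"CL = {cl_mean:.2f} +/- {cl_sd:.2f} L/h")
print(f"Vd = {vd_mean:.1f} +/- {vd_sd:.1f} L")
print(f"t1/2 = {t_half:.1f} h")
```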

How do you know the PDF distributes normally? If the measurements

distribute log-normally, an application of average and standard deviation

calculations may yield aberrant values.
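The "aberrant values" are easy to demonstrate. In this sketch (the geometric mean of 4.0 L/h and log-scale SD of 0.8 are illustrative assumptions), normal-theory summaries applied to a log-normal clearance sample produce a physically impossible lower bound:

```python
import math
import random

random.seed(1)

# Simulate a log-normal "clearance" sample with a large spread.
sample = [random.lognormvariate(math.log(4.0), 0.8) for _ in range(10_000)]

n = len(sample)
mean = sum(sample) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

# The naive normal-theory bound "mean - 2 SD" falls below zero, although
# every observation is strictly positive.
print(f"arithmetic mean - 2*SD = {mean - 2 * sd:.2f}")  # negative
print(f"smallest observation   = {min(sample):.2f}")    # > 0
```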

What is the referent population? (How do I understand the

representativeness of the data?) Unless the investigator defines the

population carefully (but narrowly, e.g., male, sophomore medical students

at Oxford), I do not know what the average and standard deviation

represent. In the U.S., we try to reference populations to the

census. Thus, the sampling frame might consist of random phone calls to

persons residing in the U.S. with secondary tests to convince me that the

sample of subjects resembles the census (e.g., same heights, weights, ages,

genders, etc.).

I lack confidence that unselected pharmacokinetic data distribute normally,

as expected in an "overall uncertainty" model. In theory, I could measure

the values of elimination rate and volume of distribution for a

representative group. Then, I could propagate the distributions through

the above equation to derive a distribution equal to the measured

distributions of the population. Currently, most risk assessors use Monte

Carlo techniques to accomplish this task. Charles Yoe, an economist at the

College of Notre Dame in Baltimore, MD, is perhaps the best teacher of

these techniques.
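The Monte Carlo propagation described above can be sketched briefly. Here CL and Vd are drawn from assumed log-normal distributions (the geometric means and geometric SDs are illustrative, not from any study) and pushed through t1/2 = 0.693 x Vd / CL:

```python
import math
import random

random.seed(0)

# Assumed log-normal population distributions (geometric mean, geometric SD).
CL_GM, CL_GSD = 4.5, 1.3    # clearance, L/h
VD_GM, VD_GSD = 40.0, 1.2   # volume of distribution, L

def draw(gm, gsd):
    """One draw from a log-normal with geometric mean gm and geometric SD gsd."""
    return random.lognormvariate(math.log(gm), math.log(gsd))

# Propagate the input distributions through t1/2 = ln(2) * Vd / CL.
n = 100_000
t_half = sorted(math.log(2) * draw(VD_GM, VD_GSD) / draw(CL_GM, CL_GSD)
                for _ in range(n))

print(f"median t1/2  = {t_half[n // 2]:.2f} h")
print(f"90% interval = {t_half[int(0.05 * n)]:.2f} to {t_half[int(0.95 * n)]:.2f} h")
```

Because both inputs are log-normal, their ratio is log-normal as well, so the simulated half-life distribution is itself skewed to the right.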

Daniel M. Byrd III, Ph.D., D.A.B.T.

at home:

Not an infectious disease expert

8370 Greensboro Drive

McLean, VA 22102-3500

(703)848-0100

byrdd.-a-.cox.net

- On 21 Jun 2005 at 16:09:58, "J.H.Proost" (J.H.Proost.at.rug.nl) sent the message

The following message was posted to: PharmPK

Dear Daniel,

You made a few comments with respect to STS, and

population pharmacokinetics in general:

> How do you know the PDF distributes normally? If the

>measurements distribute log-normally, an application of

>average and standard deviation calculations may yield

>aberrant values.

This is quite often a difficult question. Either one has

only a low number of subjects (e.g. 10), with fairly

precise individual parameter estimates, or a larger number

of subjects with only a few measurements per subject,

thus with imprecise individual parameter estimates. Neither

of these cases allows a clear discrimination between

statistical distributions. In general I prefer a

log-normal distribution, based on theoretical arguments,

unless there is clear evidence for a different

distribution.

If one assumes a log-normal distribution, the mean

and SD values are of course the geometric mean and SD.

> What is the referent population? (How do I understand

>the representativeness of the data?) Unless the

>investigator defines the population carefully (but

>narrowly, e.g., male, sophomore medical students at

>Oxford), I do not know what the average and standard

>deviation represent. In the U.S., we try to reference

>populations to the census. Thus, the sampling frame

>might consist of random phone calls to persons residing

>in the U.S. with secondary tests to convince me that the

>sample of subjects resembles the census (e.g., same

>heights, weights, ages, genders, etc.).

In general, population pharmacokinetics does not refer to

a 'general population', but to a specific population, as

defined by the inclusion and exclusion criteria in the

experimental protocol. This implies that the conclusions

refer to this population only.

> I lack confidence that unselected pharmacokinetic data

>distribute normally, as expected in an "overall

>uncertainty" model. In theory, I could measure the

>values of elimination rate and volume of distribution for

>a representative group. Then, I could propagate the

>distributions through the above equation to derive a

>distribution equal to the measured distributions of the

>population. Currently, most risk assessors use Monte

>Carlo techniques to accomplish this task.

I agree. In this case, I would prefer to 'measure' (taking

into account the relatively large standard error, I would

say 'estimate') clearance and volume of distribution

rather than elimination rate constant and volume of

distribution, since the latter pair are by definition

correlated. In case one assumes a log-normal distribution,

the estimation of standard errors of derived parameters

(e.g., elimination rate constant, half-life, mean

residence time) can be performed easily, even in case of

correlations between the parameters.
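The easy estimation Hans describes follows because, on the log scale, a derived parameter such as half-life is a simple sum: ln(t1/2) = ln(ln 2) + ln(Vd) - ln(CL). A short sketch, in which all of the log-scale means, SDs, and the correlation are illustrative assumptions, not values from the discussion:

```python
import math

# Assumed log-scale summaries for CL and Vd plus their correlation.
mu_cl, sd_cl = math.log(4.5), 0.25   # mean and SD of ln(CL in L/h)
mu_vd, sd_vd = math.log(40.0), 0.20  # mean and SD of ln(Vd in L)
rho = 0.5                            # correlation of ln(CL) and ln(Vd)

# ln(t1/2) = ln(ln 2) + ln(Vd) - ln(CL), so its log-scale mean and variance
# follow by simple algebra, even when the parameters are correlated.
mu_t = math.log(math.log(2)) + mu_vd - mu_cl
var_t = sd_vd**2 + sd_cl**2 - 2 * rho * sd_cl * sd_vd
sd_t = math.sqrt(var_t)

print(f"geometric mean t1/2 = {math.exp(mu_t):.2f} h")
print(f"geometric SD of t1/2 = {math.exp(sd_t):.2f}")
```

Note how a positive correlation between ln(CL) and ln(Vd) shrinks the variance of the derived half-life, since the correlated errors partially cancel in the difference.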

Hans Proost

Johannes H. Proost

Dept. of Pharmacokinetics and Drug Delivery

University of Groningen

The Netherlands

Copyright 1995-2010 David W. A. Bourne (david@boomer.org)