Hello all
When predicting PK profiles in children via population PK modelling, I understand that the typically
used allometric exponent of 0.75 is applied to the clearance, for size-based scaling of metabolism
(in addition to maturation functions for the early phase of life). The various reviews that I have
read point to this value of 0.75 as being acceptable to describe 'metabolic processes', whereas
volume-related processes typically use an exponent of 1.
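In code, the convention I am describing looks like this (a sketch only; the 70 kg reference and parameter values are illustrative, not drug-specific):

```python
# Sketch of the standard size-scaling convention (illustrative values only).
def scale_clearance(cl_adult, weight_kg, ref_kg=70.0):
    # Metabolic/clearance parameters: allometric exponent 0.75
    return cl_adult * (weight_kg / ref_kg) ** 0.75

def scale_volume(v_adult, weight_kg, ref_kg=70.0):
    # Volume-related parameters: exponent 1 (linear in weight)
    return v_adult * (weight_kg / ref_kg) ** 1.0
```

Note that for a small child the 0.75 exponent predicts a higher clearance than linear weight scaling would.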
My question is, is this exponent of 0.75 still relevant when applied to an apparent clearance of
orally administered drugs (i.e. CL/F, when no IV data is available) which are expected to have a LOW
FRACTION ABSORBED (say, Fa less than 50%)?
For high Fa drugs, where F is mainly dependent on first pass metabolism, I would imagine that 0.75
is fine, given that both CL and F are dependent on metabolic processes (for metabolised drugs). But
when the limiting factor for F is absorption extent, and in particular, dissolution-limited - is
CL/F still appropriate to scale with an exponent of 0.75? Should an exponent of 1 be used on F,
given that intestinal lumen volume affects dissolution? How can a high extraction (low FH) and low
absorption (low Fa) drug be modelled using the allometric approach with a single exponent?
Please note - as a PBPK modeller myself, I fully understand the merits of mechanistic
physiologically based modelling for paediatric predictions, and I am trying to understand better the
allometric-based approach!
I hope you can help, either by way of response, or by pointing me in the direction of a relevant
publication in case I've missed something in the literature.
Thanks
Kathryn
Dear Kathryn,
For Fa and dissolution limitation, I would focus on dose and the amount
in the GI tract. If the same adult dose is given to a child (rarely the
case), then lower luminal volume/amount of fluid taken in with the dose
may decrease Fa (although this won't be linear with body weight, as F must lie in (0,1)). In most
cases, however, dose amounts are scaled down
(e.g. mg/kg), so the smaller relative dose should mean Fa remains the same.
So for a high extraction drug, the CL exponent will be 0.75 due to liver
vol/blood flow scaling with approx 0.75 (Johnson 2005, Price 2003), and
if you are dosing mg/kg I would assume Fa does not change with size,
hence CL/F ~ wt^0.75. If you are giving higher mg/kg doses then lower
dissolution may well decrease Fa but this would be better informed with
in vitro dissolution data (linked with assumed lumen volume) rather than
scaling by weight.
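In code form, the assumption above amounts to the following sketch (not a validated model; F is decomposed as Fa*Fg*Fh and all three components are taken as size-invariant under mg/kg dosing):

```python
def apparent_oral_clearance(cl_70kg, fa, fg, fh, weight_kg):
    """Sketch of CL/F under the assumptions argued above (not a validated
    model): systemic CL scales with weight^0.75, and bioavailability
    F = Fa * Fg * Fh is size-invariant (e.g. Fa unchanged with mg/kg dosing).
    CL/F then inherits the 0.75 exponent."""
    cl = cl_70kg * (weight_kg / 70.0) ** 0.75
    return cl / (fa * fg * fh)
```

The point of the sketch: whatever the (size-invariant) value of F, the ratio of CL/F between two body weights depends only on the allometric exponent.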
BW,
Joe
Kathryn,
Theory based allometry predicts an exponent of 3/4 for many functional processes e.g. basal
metabolism, cardiac output, lung volume flow (West et al 1997). It is not restricted to metabolism.
I would not expect Fa to be an allometrically related parameter if it is low because of permeability
or low dissolution. However, extraction ratio (e.g. 1-FH) is a different matter. For high extraction
ratio drugs you also need to consider size-related prediction of organ blood flow. Allometric
theory does not clearly distinguish between the processes of organ intrinsic clearance (independent
of the organ blood flow rate of delivery) and organ clearance (dependent on the organ blood flow
rate of delivery: blood flow, blood cell distribution, plasma protein binding).
I have used allometric predictions of ethanol Vmax (which predicts the equivalent of intrinsic
clearance at a particular concentration) as well as allometric predictions of hepatic portal blood
flow. Allometric size was predicted using normal fat mass. You may wish to look at Holford et al.
(2015).
Best wishes,
Nick
West GB, Brown JH, Enquist BJ. A general model for the origin of allometric scaling laws in biology.
Science. 1997;276:122-26.
Holford N, Jiang Y, Murry DJ, Brown TL, Milavetz G. The Influence of Body Composition on
Ethanol Pharmacokinetics using a Rate Dependent Extraction Model. PAGE. 2015; 24 Abstr 3405
[www.page-meeting.org/?abstract=3405].
Hi All
Just to add that "high extraction" isn't an intrinsic attribute of the drug and may change with age;
please see:
http://www.ncbi.nlm.nih.gov/m/pubmed/26864786/
Regards
Masoud
Nick:
You say below:
"Theory based allometry predicts an exponent of 3/4 for many functional processes e.g. basal
metabolism, cardiac output, lung volume flow (West et al 1997). It is not restricted to metabolism."
The article you cite by West is an excellent treatise on the subject of scaling across species.
There are numerous other papers that provide theoretical foundations for allometric scaling across
species (e.g., Darveau, Nature, 2002; West, J Exp Biol. 2005). The allometric theory accounts for
differences in rate-related functions across species that span many orders of magnitude in body
mass.
I am not aware of any theory that supports scaling by weight to the 3/4 power WITHIN A SPECIES.
Similarly, I am unaware of unambiguous data strongly supporting allometric scaling across the
typical range of human weights.
As applied to human pharmacokinetics, I do not believe any theory supports allometric scaling. You
can see this if you consider the ends of the spectrum. Small size is associated with children. They
are not a separate species, but are humans undergoing metabolic maturation. I am not aware of any
allometric theory that accounts for metabolic maturation with age. Similarly, very large size is
associated with morbid obesity. I am not aware of any allometric theory that suggests that clearance
in morbid obesity is best estimated using allometric principles. Between these extremes, scaling by
weight is not very different than scaling by weight to the three quarters power.
Of course, I will defer to data. Can you point me to human PK examples where allometric scaling of
weight to the three quarters power reliably provides substantially better fits to the data than
scaling by weight alone? I can point to many examples where it makes no difference. I can also point
to many examples where investigators simply use allometric scaling without first seeing if
allometric scaling was supported by the data. However, I know of only one or two examples where
models were estimated with and without allometric scaling, and the allometric scaling worked better
than the simpler non-scaled model. If allometric scaling for human pharmacokinetics was “true” on
first principles, as your comments imply, then the literature should abound with unequivocal
examples.
Thanks,
Steve
--
Steven L. Shafer, MD
Professor of Anesthesiology, Perioperative and Pain Medicine, Stanford University
Adjunct Associate Professor of Bioengineering and Therapeutic Sciences, UCSF
Hi Nick and Steve,
If Nick and Steve don't mind I would like to comment as well. I hope this is not bad netiquette to
interject like this...
Why do you make the distinction between/within species? If I recall correctly the math part of the
theory has nothing to do with appearance or breeding, the things that define species. Sure,
different species are used to illustrate the theory. But this is only done so that the signal
(differences due to size) is very large compared to the noise (differences due to other things).
Another reason why inter-species examples are often used is that it is probably the most impressive
thing to predict reasonably well over exceedingly large size ranges. I don't know of any reason why
it wouldn't work within species as well provided other things are correctly accounted for,
maturation and aging being the most obvious things.
Allometric scaling and obesity and maturation are unrelated things! Allometry is size and obesity is
body-composition. The fact that large humans tend to be obese humans is not a biological law, it is
influenced by our culture and other things. As a thought experiment, if we had a different culture
and provided all humans with the same number of calories we would have obese children and skinny
adults.
For really challenging data sets incorporating young children, children and adults in a single
continuous PK model, I find the literature pretty unequivocal: allometric models generally
perform reasonably well given the difficulty of the task. Linear models hardly exist for these kinds
of datasets, and if they do they likely need to incorporate arbitrary cut-points to function. As an
example, I would point you to our general purpose PK model for propofol (Eleveld, D.J., Proost, J.H.,
Cortínez, L.I., Absalom, A.R. and Struys, M.M., 2014. A general purpose pharmacokinetic model for
propofol. Anesthesia & Analgesia, 118(6), pp.1221-1237.) If you look at CL versus weight in Figure 9
it is very hard to believe that a linear model would work well there. We have a similar graph for
remifentanil but this is not published yet.
That researchers develop allometric models without giving equal weight to linear models reflects,
I think, a reluctance to re-invent the wheel each time you need a wheel. Allometric models work well
over wide size ranges and introduce no extra model parameters, so there isn't really a downside. Allometric
scaling is based on mathematical theory and some not terribly unreasonable assumptions. Why is the
burden-of-proof on allometric scaling to be necessarily better than linear scaling? Where is the
proof that linear scaling is better? If they both fit the data essentially equally why should we
discard a comprehensive theory in preference for something that is only a mathematical and
historical convenience?
Warm regards,
Douglas Eleveld
Dear Kathryn
Allometric scaling is a pragmatic approximation that, with occasional adjustments, yields at least
in some cases PK/PD predictions within the customary two-fold margin of observed values (i.e. within
an error margin of 50-100%).
Ignoring the physiological, biochemical and genetic differences between species and within species
(e.g. those between neonates, toddlers, young adults etc.) is difficult to justify theoretically and
practically. The 2006 TGN1412 disaster is just a single example of how wrong allometric scaling can
be.
Zvi
Steve,
Thanks for your comments and questions on theory based allometry. I have cross-posted it to nmusers
because this topic has been of interest there over many years.
I am aware that you wrote an editorial on this topic with Denis Fisher which you titled “Allometry,
Shallometry!”. Having read your editorial and your comments I don’t think the implication that
allometry is shallow is appropriate. On the contrary, I get the impression (see below) that you and
Denis do not really understand the biological principles underlying allometry and seem to be unaware
of the substantial literature supporting theory based allometry and its application in humans.
Would your journal be willing to receive a rejoinder to your editorial with a deeper explanation of
the science and the literature?
Best wishes,
Nick
Nick Holford, MBChB, FRACP
Professor of Clinical Pharmacology, University of Auckland
Adjunct Professor of Bioengineering and Therapeutic Sciences, UCSF
On 21-May-16 00:42, Steven L Shafer wrote:
Nick:
You say below:
"Theory based allometry predicts an exponent of 3/4 for many functional processes e.g. basal
metabolism, cardiac output, lung volume flow (West et al 1997). It is not restricted to metabolism."
The article you cite by West is an excellent treatise on the subject of scaling across species.
There are numerous other papers that provide theoretical foundations for allometric scaling across
species (e.g., Darveau, Nature, 2002; West, J Exp Biol. 2005). The allometric theory accounts for
differences in rate-related functions across species that span many orders of magnitude in body
mass. I am not aware of any theory that supports scaling by weight to the 3/4 power WITHIN A
SPECIES.
NH:
Unfortunately, you repeat a common misunderstanding of allometric theory that it is somehow only
applicable across species. Allometric theory as originally proposed by West is based only on body
mass (West, Brown et al. 1997). Allometric theory does not require consideration of species or any
other covariate. This is the first commandment of allometry (Holford 2008). If you read the work by
West et al. you will find that there is nothing in the theory that prevents its use for within
species scaling using mass. Therefore the theory of West supports the use of the 3/4 exponent within
species.
SS:
Similarly, I am unaware of unambiguous data strongly supporting allometric scaling across the
typical range of human weights.
NH:
I recommend that you read the paper by McCune et al. that formally tests the allometric theory
prediction of an exponent of 3/4 for clearance based on a large study of busulfan across the human
size range (McCune, Bemer et al. 2014). The theory was tested explicitly and no evidence found to
reject the value of 3/4. Work with a drug where your expertise is renowned
(https://www.youtube.com/watch?v=gD7BZIl2uzc) has clearly demonstrated the benefit of allometric
theory across humans from infants to adults. Eleveld showed the fit was improved using theory based
allometric scaling (Eleveld, Proost et al. 2014). Schuttler also demonstrated an improved fit with an
estimated exponent for clearance of 0.75, which is consistent with the theoretical value of 3/4
(Schuttler and Ihmsen 2000).
In the spirit of modern scientific philosophy are you aware of unambiguous data (and analysis) that
falsifies the theory of allometry (https://en.wikipedia.org/wiki/Karl_Popper)?
SS:
As applied to human pharmacokinetics, I do not believe any theory supports allometric scaling.
NH:
As noted above there is nothing in the theory of allometry proposed by West that would mean it is
not applicable to humans. If you do not want to believe this theory then that is your personal
choice, as it would be for any religious belief, and I will not attempt to change your religion.
SS:
You can see this if you consider the ends of the spectrum. Small size is associated with children.
They are not a separate species, but are humans undergoing metabolic maturation.
NH:
I have personally been a strong advocate of considering all humans, regardless of age, as being a
single species and have sought integrated explanations of human clinical pharmacology. If you were
aware of the paediatric pharmacokinetic literature then you would know of many publications
supporting the use of a combination of theory based allometry for size plus empirical maturation
models for age (see this review (Holford, Heo et al. 2013)).
SS:
I am not aware of any allometric theory that accounts for metabolic maturation with age.
NH:
From the first commandment it necessarily follows that changes associated with age are not
predictable from the allometric theory of West et al. There is no age related theory to predict
quantitative changes. However, plausible biological understanding of maturation means that clearance
will be zero (or at least very small) at conception and will approach a maximum when it will be
indistinguishable from the mature adult value. So at least at the extremes there is a biological and
quantitative prediction of maturation. Joining these extremes requires an empirical approach. A
monotonic sigmoid emax function has been suggested (Tod, Jullien et al. 2008) and widely applied
(Holford, Heo et al. 2013). A more complex function may be needed but this will need to be driven
first by data not by theory.
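In code, the monotonic sigmoid Emax (Hill) maturation function takes this form (a sketch; the parameter values here are purely illustrative, not drug-specific):

```python
def maturation_fraction(pma_weeks, tm50=47.7, hill=3.4):
    # Sigmoid Emax (Hill) maturation function: near zero at conception,
    # approaching 1 (the mature adult value) with increasing postmenstrual
    # age (PMA). tm50 is the PMA at which 50% of mature clearance is reached.
    # Parameter values here are illustrative only, not drug-specific.
    return pma_weeks ** hill / (tm50 ** hill + pma_weeks ** hill)

# A full clearance prediction then combines size and maturation:
#   CL = CL_std * (WT / 70) ** 0.75 * maturation_fraction(PMA)
```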
SS:
Similarly, very large size is associated with morbid obesity. I am not aware of any allometric
theory that suggests that clearance in morbid obesity is best estimated using allometric principles.
Between these extremes, scaling by weight is not very different than scaling by weight to the three
quarters power.
NH:
Body composition contributes to body mass. Theory based allometry does not specify how differences
in body composition affect allometric size. It is plausible however to propose that the size that is
the driving force behind allometric theory may not be determined simply by total body weight.
Application of theory based allometry in conjunction with fat free mass can be used to determine a
normal fat mass (NFM) (Anderson and Holford 2009). The NFM concept has been used to account for body
composition differences determining allometric size. NFM is not predicted from allometric theory but
is a biologically plausible extension of the theory of allometric size based on mass. NFM has been
used to show that total body mass rather than fat free mass provides a better description of
propofol pharmacokinetics in the obese (Cortinez, Anderson et al. 2010). It has also been used to
show that fat free mass is a better predictor for dexmedetomidine, but obesity is associated with
reduced clearance independently of allometric size based on fat free mass (Cortinez, Anderson et al. 2015).
This demonstrates how the complexities of biology can be better understood based on a plausible
theory of allometry. The theory may not be perfect but it is compatible with a very large number of
observational studies in many domains. Investigation of other phenomena such as maturation and obesity
is aided by building on allometric theory.
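The NFM computation itself is simple. As a sketch (Ffat is an estimated, drug- and parameter-specific factor; the numbers in the test values below are hypothetical):

```python
def normal_fat_mass(weight_kg, ffm_kg, f_fat):
    # Normal fat mass (Anderson & Holford 2009):
    #   NFM = FFM + Ffat * (WT - FFM)
    # Ffat = 0: allometric size driven by fat-free mass alone.
    # Ffat = 1: allometric size driven by total body weight.
    # Ffat is estimated from data for each drug and parameter.
    return ffm_kg + f_fat * (weight_kg - ffm_kg)
```

NFM then replaces total body weight in the allometric scaling expression, which is how body composition enters the size model.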
SS:
Of course, I will defer to data. Can you point me to human PK examples where allometric scaling of
weight to the three quarters power reliably provides substantially better fits to the data than
scaling by weight alone?
NH:
In addition to the large study of busulfan PK mentioned previously (McCune, Bemer et al. 2014) I
suggest you look at the prediction of morphine clearance across the human size and age range using
theory based allometry with maturation. Prediction of clearance in a large external data set was
clearly better than other approaches including empirical allometry (Holford, Ma et al. 2012). Other
published examples can be found in this review (Holford, Heo et al. 2013).
SS:
I can point to many examples where it makes no difference. I can also point to many examples where
investigators simply use allometric scaling without first seeing if allometric scaling was supported
by the data.
NH:
This is often the case when sample sizes are small, weight distribution is narrow and power is small
(Anderson and Holford 2008). A pragmatic approach given the challenges of falsifying allometric
theory with small data sets is to assume it is useful. It is certainly better than using empirical
allometry or ignoring size altogether.
SS:
However, I know of only one or two examples where models were estimated with and without allometric
scaling, and the allometric scaling worked better than the simpler non-scaled model. If allometric
scaling for human pharmacokinetics was “true” on first principles, as your comments imply, then the
literature should abound with unequivocal examples.
NH:
If you know of examples of suitably powered studies which can also show they have accounted for
other mass correlated factors that would confound the estimation of a true allometric exponent then
I would be glad to know the details.
If you read the literature carefully and exclude those that are underpowered to truly detect the
difference between an exponent of 3/4 and say an exponent of 1 or an exponent of 2/3 and have
accounted for all other factors, such as maturation, that are necessarily correlated with mass then
you will not find many examples. I am not aware of any that are inconsistent with allometric theory.
NH:
Anderson, B. J. and N. H. Holford (2008). "Mechanism-based concepts of size and maturity in
pharmacokinetics." Annu Rev Pharmacol Toxicol 48: 303-332.
Anderson, B. J. and N. H. G. Holford (2009). "Mechanistic basis of using body size and maturation to
predict clearance in humans." Drug Metab Pharmacokinet 24(1): 25-36.
Cortinez, L. I., B. J. Anderson, N. H. Holford, V. Puga, N. de la Fuente, H. Auad, S. Solari, F. A.
Allende and M. Ibacache (2015). "Dexmedetomidine pharmacokinetics in the obese." Eur J Clin
Pharmacol doi:10.1007/s00228-015-1948-2.
Cortinez, L. I., B. J. Anderson, A. Penna, L. Olivares, H. R. Munoz, N. H. Holford, M. M. Struys and
P. Sepulveda (2010). "Influence of obesity on propofol pharmacokinetics: derivation of a
pharmacokinetic model." Br J Anaesth 105(4): 448-456.
Eleveld, D. J., J. H. Proost, L. I. Cortinez, A. R. Absalom and M. M. Struys (2014). "A general
purpose pharmacokinetic model for propofol." Anesthesia and analgesia 118(6): 1221-1237.
Holford, N. (2008). "Re: [NMusers] Scaling for pediatric study planning."
http://www.cognigencorp.com/nonmem/current/2008-September/0182.html.
Holford, N., Y. A. Heo and B. Anderson (2013). "A pharmacokinetic standard for babies and adults." J
Pharm Sci 102(9): 2941-2952.
Holford, N. H., S. C. Ma and B. J. Anderson (2012). "Prediction of morphine dose in humans."
Paediatr Anaesth 22(3): 209-222.
McCune, J. S., M. J. Bemer, J. S. Barrett, K. Scott Baker, A. S. Gamis and N. H. G. Holford (2014).
"Busulfan in Infant to Adult Hematopoietic Cell Transplant Recipients: A Population Pharmacokinetic
Model for Initial and Bayesian Dose Personalization." Clinical Cancer Research 20(3): 754-763.
Schuttler, J. and H. Ihmsen (2000). "Population pharmacokinetics of propofol: a multicenter study."
Anesthesiology 92(3): 727-738.
Tod, M., V. Jullien and G. Pons (2008). "Facilitation of drug evaluation in children by population
methods and modelling." Clin Pharmacokinet 47(4): 231-243.
West, G. B., J. H. Brown and B. J. Enquist (1997). "A general model for the origin of allometric
scaling laws in biology." Science 276: 122-126.
Dear Zvi,
I read a bit about TGN1412. Can you explain what the problem has to do with allometric scaling?
Do you think some other scaling method would have avoided the problem? What method?
warm regards,
Douglas Eleveld
Dear Nick:
By all means you should respond.
Dennis and I never claim that allometry is "shallow." Far from it. As a principle to understand
scaling across species, it is a very deep and profound insight. What concerns me is the application
of allometry to the scaling of human pharmacokinetics. This appears based on the assumption that the
principles that apply to scaling across species that vary across many orders of magnitude also apply
to human pharmacokinetics across the ranges of human weight. A simple example of the high and low
extremes of weight demonstrates this to be false.
Thus, I have zero interest in a rejoinder with "a deeper explanation of the science and the
literature." I want to see DATA that shows that allometric scaling reproducibly provides better
estimates of human pharmacokinetics than scaling by body size alone. If such data do not exist, or
exist only in isolated cases with the majority of cases showing that it makes no difference,
then allometric scaling is not a priori correct. As stated in our editorial, allometric scaling
is a testable hypothesis. As such, it should be tested before adoption, just like any other
assumption about data.
Lastly, I would appreciate your avoiding ad hominem arguments, such as “you and Denis do not really
understand the biological principles underlying allometry and seem to be unaware of the substantial
literature supporting theory based allometry and its application in humans."
Thanks,
Steve
Dear Steve, Nick,
Here are some other points/responses to the original posting to guide
people on within-species considerations:
Firstly, do not rely on PK data to always tell you. McLeay (CPK 2012)
looked at estimating allometric exponents for 56 drugs. The average
estimate was 0.65; no precision was reported, but from the histogram the
95% CI was about 0.1-1.2. Lesson: PK data are generally too noisy to
discriminate a "true" within species exponent, and the difference in
predictive or descriptive power of weight raised to somewhere in the
region of 0.6-0.8 will be negligible.
Secondly, if you are a clinical pharmacologist rather than data
scientist, why not use some prior knowledge of the system? Think about
the biological processes involved in CL. Cardiac output and organ blood
flows scale with approx wt^0.75 (good summaries in the PBPK literature,
e.g. Price 2003 Crit Rev Tox), Johnson (Liver Trans 2005) showed liver
volume scales with wt^0.78 in humans, Rhodin found GFR scales with
wt^0.63 in humans (no surprise then that for decades paediatric
nephrologists have scaled renal function estimates to BSA).
Thirdly, look at how clinical dosing has evolved over time: When
Holford was still in his pram (probably), Crawford (Pediatrics 1950) was
noting dose requirements seemed to scale with BSA rather than linear
weight (he drew parallels with BMR scaling also), and of course
oncologists dose narrow therapeutic index agents by BSA. How has this
practice come about? If scaling by weight is "not very different"
from allometry, should we be telling oncologists they have their
dosing wrong?
Finally, you ask for real world examples, which I hope from reading the
above you will realise might not be unequivocal, but at least show CL
does not scale with linear body weight: one from small molecules (Burger
2007 CPT lamivudine) and one for biologics (Goldman 2012 Ann Paed
Rheum), which interestingly shows the correlation between infliximab CL
and metabolic rate.
In their recent editorial Drs Fisher and Shafer quote Sheiner "let the
data speak". When considering how to parameterise a covariate model,
the data, including the above, speak thus:
1. We cannot hope to discriminate between weight raised to 0.6-0.8, so
by estimating an allometric exponent we are adding an uncertain
parameter to our model, potentially causing instability.
2. From biology, and evolution of clinical dosing guidelines, we know
CL scales somewhere in this range, so fixing the exponent to something
close to 0.75 is a reasonable "biological prior".
3. Many covariates we are interested in (age in neonates/infants, organ
function) may have some degree of correlation with weight, so fixing the
weight relationship will allow more precise estimation of these effects,
and avoid biasing an estimated weight exponent.
Joe
Joseph F Standing
MRC Fellow, UCL Institute of Child Health
Antimicrobial Pharmacist, Great Ormond Street Hospital
Honorary Senior Lecturer, St George's, University of London
Nick
You, Steve, and I had the same mentor — Lewis Sheiner. His most important teaching was LET THE DATA
SPEAK. When theory and evidence clash, Lewis would not blindly stick to theory. In this instance,
Steve asked you to provide EVIDENCE to support your claim that allometric scaling would yield
markedly better fits than weight-normalization. You offered the McCune article to support your
argument (without mentioning your vested interest as an author). In that article, you wrote:
all clearance (CL,Q) and volume (V1,V2) parameters were scaled for body size and composition using
allometric theory and predicted fat free mass (FFM).(19–21)
It does not appear that you evaluated a weight-normalized model. If you don’t look, you never see!
I also note that the table reports the following:
CL (clearance): 11.4 (1.1) L/h per 62 kg NFM
This brings up the issue of SAFETY. I was a clinician for several decades and Steve continues to be
an active clinician. I don’t know if you see patients. However, many participants in this mailing
list have never selected a dose of a drug, then administered it to a patient. I venture to say that
most clinicians when faced with a dosing regimen that requires raising weight to the 3/4 power would
run in the opposite direction (or, make a dosing error). The entry in the table is even more
problematic. A clinician first needs to calculate NFM, and then realize that the dose is not
simply 11.4 multiplied by a factor of weight/62.
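To make the point concrete, here is the arithmetic that table entry actually implies, assuming the 3/4 exponent used in the article (a sketch; the patient value below is hypothetical):

```python
def busulfan_cl(nfm_kg, cl_std=11.4, nfm_std=62.0):
    # The table entry implies allometric scaling of the standard value:
    #   CL = 11.4 L/h * (NFM / 62 kg) ** 0.75
    # The clinician must first compute NFM, then apply the 3/4 power;
    # the result is NOT simply 11.4 multiplied by weight/62.
    return cl_std * (nfm_kg / nfm_std) ** 0.75
```

For a hypothetical patient with half the standard NFM (31 kg), the predicted clearance is about 59% of the standard value, not 50%, which is exactly the kind of non-proportionality that invites dosing errors.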
If the goal of PK is publishing journal articles about pure science, your approach might be OK.
But, as far as I know, the goal is to improve patient safety. If there were a strong (or even
moderate) preference for allometrically-scaled models, I would support their use. But, Steve asked
you to provide EVIDENCE for this and you failed to do so. You also cited Eleveld’s article.
Although his manuscript did include one weight-normalized model, the allometric model required
multiple additional terms to fit the data. In particular, that article (as yours) required a
“maturation” term to describe younger patients. In other words, neither of these articles
demonstrates that allometric models are sufficient to describe the range of sizes in humans. Had
Eleveld added those extra terms to the weight-normalized model, it might have performed as well as
the model he published.
Over the past two decades, I have analyzed data from > 200 studies including many in infants and
children. In many of these, I have compared weight-normalized and allometric (and, in adults,
unscaled) approaches. In virtually all cases, the difference in fit between the weight-normalized
and allometric approaches was trivial and often favored the weight-normalized approach. Can you
cite cases in which the allometric approach fares much better?
I also have had the opportunity to be an editor for a journal and to review articles for many
journals (and I have reviewed submissions by many people who participate in this mailing list). In
many instances, authors refuse to evaluate a weight-normalized model, citing you. In many of these
instances, I have insisted that the authors conduct that analysis and (as far as I can recall) there
has never been strong evidence to support the allometric model.
I repeat — if you don’t look, you don’t see.
I suspect that Steve will have more to add.
Dennis
Dennis Fisher MD
Thanks for this Joe,
I agree with your points, and would add a couple more questions:
1. How many of the analyses which reported a "better fit" using empirically estimated exponents were
powered to estimate them?
2. How many of the analyses which reported a "better fit" using empirically estimated exponents
obtained 95% CI's which included the theoretical values?
3. In particular for CL, how many of the analyses which reported a "better fit" using empirically
estimated exponents obtained 95% CI's which excluded "1"?
I think we need to figure out which of the data are only mumbling and which are speaking :-)
Regards,
David
David Foster
Senior Lecturer in Pharmacokinetics | P4-08 | Playford Building
School of Pharmacy & Medical Sciences | Australian Centre for Pharmacometrics | Sansom Institute
for Health Research
University of South Australia
CEA-19, GPO Box 2471
Adelaide, SA, 5001
I am not sure that this can be solved by voting :)
As an example, I opened 5 large phase 3 datasets, trying to stay outside
of the cancer area (where WT dependence could be very different).
So, the exponents on CL (95% CI):
0.51 (0.47-0.55)
0.57 (0.49-0.65)
0.64 (0.47-0.82)
0.68 (0.61-0.75)
0.97 (0.91-1.03)
All are monoclonal antibodies, all are for adult subjects, not for a
pediatric/adult combination of data.
You can pick and choose whatever you like. We usually do a data-driven
analysis, and fix the parameter to the allometric value of 0.75 if the data
are not informative.
Also, monkey-to-human scaling for monoclonals is best performed (there
are several papers on that) with CL exponent in the range of 0.8-0.9,
larger than 0.75 but smaller than 1 (this was also data-driven, based on
the post factum analyses of monkey and human data for many mAbs).
Regards
Leonid
Dennis,
I do agree with what I think your sentiment is (please correct me if I read your view wrong): that
a claim which requires additional correction factors “smells pretty fishy”. It
seems as if the proponents wish to avoid falsification of their favorite theories by inventing
additional concepts. Unfortunately for everyone, falsification is more difficult than it looks with
real world PK data. Individuals differ not only by size but also age, sex, body-composition and a
myriad of other poorly documented and even less well understood things and these are all potential
confounders. And we haven’t even mentioned the model-misspecification issue of reducing a biological
being to a couple of homogeneous compartments.
I always imagine a very smart person back in 1800 when first confronted with a proponent of the
theory of gravity with the fact that “all things fall toward the earth”. They would have the
counter-example that birds, clouds or those new-fangled balloons do not fall so and thus the theory
must be incorrect. When the proponent brings up barometers, air pressure and displacement, the
person would counter with “you only invoke these things to cover the flaws in the gravity theory”.
Our historical situation is even more difficult, we have a “theory of gravity”, allometric theory,
before any mathematical theories for age, sex, body-composition or health or disease are available.
With regard to the Eleveld PK model, when I started model building I only used linear weight-scaled
models. During exploratory analysis I found I needed to add many arbitrary correction factors to get
a linear weight scaled model to fit the data. With an allometric model I still needed correction
factors but fewer and these mostly seemed less arbitrary. More importantly, the fit and predictive
performance was better.
I am tempted to repeat the *entire* hierarchical model building using linear scaling but I am afraid
after all that work it may be claimed that I didn’t really do my best to “protect my favorite
theory”. In reality, I have no loyalty to arbitrary concepts and I would drop allometric theory the
very moment I found something else that worked better. But I understand it’s hard to convince
someone else of this. Anyway, the data underlying the Eleveld PK model is available in the
supplementary data. So anyone can test anything they want with the same data I had. If a linear
weight-scaled model could be found which works better then it would be a great thing for patients.
Warm regards,
Douglas Eleveld
Back to the Top
Dear Douglas
The dose for the FIH TGN 1412 study was selected based on monkey data completely ignoring the
inter-species differences in the density of CD28 on monkey & human T cells as well as the
inter-species differences in the affinities of TGN 1412 for these sites. The investigators followed
an FDA guideline based on NOEL, which essentially relies on simple allometry. They even reduced the
dose by another safety factor of 500. Had they used the information on CD28 expression (density) on
human T cells and the correct affinity data, they would have realized that the dose they had
selected might result in CD28 occupancy in excess of 90%, an occupancy level considered highly
dangerous when dealing with a super-agonist such as TGN1412 under any circumstances, and in
particular in a FIH trial.
Sincerely
Zvi
Back to the Top
Dear Nick:
You stated in your comments to me, both on NMUSERS and on PHARMPK, that a paper of yours with Dr.
McCune offered a validation of the use of allometric scaling. Dennis Fisher has already commented on
this paper. Dennis said I might have additional comments. As usual, Dennis is right.
The paper is McCune et al, Busulfan in infant to adult hematopoietic cell transplant recipients: a
population pharmacokinetic model for initial and Bayesian dose personalization. Clin Cancer Res.
2014;20:754-63. Based on my reading of your paper, and the supplementary material, I would like to
offer several observations.
1. Quoting from page 755: “To characterize busulfan pharmacokinetics over the entire age
continuum, all clearance (CL,Q) and volume (V1, V2) parameters were scaled for body size and
composition using allometric theory and predicted fat-free mass.” In other words, you assumed
allometric theory at the outset of your analysis.
2. I think your testing of this assumption is described on page 759: “we estimated the allometric
exponents for each of the 4 main pharmacokinetic parameters (Supplementary Table S4). Initial
estimates of 2/3 and 1.25 were used for the clearance and volume exponents.” However, you tested
this with bootstrap, not with log-likelihood profiles. Why?
3. If allometry makes little difference, then it is an expected result that your final estimates
would be close to the starting parameters. This might especially be the case where there are 10
parameters in the calculation of clearance (see item 10 below), compromising the “navigability” of
the model away from the starting estimate of PWR when the other 9 parameters start at the value
determined by assuming that PWR=0.75.
4. I stated in my comments that allometric theory did not account for the upper extreme of
obesity. You agree, since you found it necessary to correct allometric scaling with an additional
parameter to account for the effects of obesity.
5. I stated in my comments that allometric theory did not account for maturation. You agree,
since you found it necessary to add an additional parameter for maturation.
6. You had actual body weight for only 133 subjects in this study, of which only 24 subjects were
less than 18 years of age (supplementary table 2). Although your model has 1610 individuals, you
only estimated the allometric portion of your model from 24 children. This allometric scaling
parameter was assumed to be true for all 1407 subjects (calculated from supplementary table 2).
Since your allometric parameter, Ffat, was derived from just 24 children, and applied to all 1407
subjects, your testing (supplementary table S4) may be a tautology.
7. You state on page 755 that you used the dosing weight in reference 18. Reference 18 is Gibbs,
et al, The Impact of Obesity and Disease on Busulfan Oral Clearance in Adults, Blood
1999;93:4436-40. Reference 18 discusses actual body weight, body surface area, adjusted ideal body
weight, and ideal body weight, all calculated from standard formulae. There is no reference to
Dosing Weight in this publication.
8. Dennis pointed out the potential safety concerns of allometric scaling. I suggest that
interested readers look at page 756 of your paper. If that does not scare clinicians, then the exact
math for dose calculation appears in supplementary table 7. Would you be comfortable if the
oncologist treating your child had to calculate dose based on the complex, interlocking equations
required to estimate body size? What theoretical advantage in dose calculation justifies the
potential for computational error inherent in supplementary table 7? The risk vs. benefit of
allometric scaling cannot be determined from the data in the paper.
9. You have no data showing how well your model predicts individual patients. The closest you
come are the visual predictive checks (figure 1) and the prediction corrected visual predictive
check (supplement 2). This tells me that the cloud of points is about right. That’s fine, but the
average patient does not die. It is the patients at the extremes of prediction accuracy who are at
increased risk. The data, as presented, do not provide this information.
10. Clearance (page 757) is calculated from 10 parameters: a population estimate, which
is adjusted for F(size), F(maturation), and F(sex). F(size) is based on dosing weight (not
explained, see 7 above), height, WHS(50), WHS(max), F(fat), FFEM(DW), and PWR (your allometric
parameter, fixed at ¾). F(maturation) is based on PMA, TM(50), and the Hill coefficient. F(sex) is a
further adjustment for sex. When clearance is a function of 10 parameters, I do not see how this
tests allometric scaling. Indeed, if allometric scaling were hurting your fit (unlikely – more
likely it makes no difference, see below), other parameters might compensate to fit the data.
11. You compare this model to models by Trame, Paci, and Bartelink, noting that
your model performs much better than these models. You are comparing your model with 12 structural
parameters to models with 2 (Trame), 4 (Paci), and 5 (Bartelink) structural parameters. Your 12
parameter model better described your data than these simpler structural models fit to your data.
Did you expect anything else?
12. You state on page 762: “The model is based on principles that have already
been shown to be robust for predictions with other small molecule agents from neonates to adults.” I
don’t see that. If “robust” means that allometry helps describe PK at the extremes of weight,
then the allometric model was not robust. It required adjustments for both maturation and for
obesity. Between these extremes, say 30-100 kg, any optimal coefficient times weight to the ¾ power
will differ by less than 10% from an optimal coefficient times weight alone. This will be invisible
given the order of magnitude variability in clearance (your figure 2).
I see little to no evidence that your paper with Dr. McCune demonstrated superiority of allometry.
Rather, your paper demonstrated that even a model with 12 parameters could not reduce the
variability of busulfan estimated clearance beyond an order of magnitude. You also demonstrated that
allometric models require specific adjustments for maturation and dosing. You will recall this was
one of the points that I made in my comments, which are also discussed in the Allometry Shallometry!
editorial.
Perhaps there are other analyses of these data that would demonstrate a significant benefit of
allometric scaling of data. If you are willing to share with me your data on the 133 subjects for
whom you have actual body weights, I would be happy to address the question directly.
Respectfully,
Steve
--
Steven L. Shafer, MD
Professor of Anesthesiology, Perioperative and Pain Medicine, Stanford University
Adjunct Associate Professor of Bioengineering and Therapeutic Sciences, UCSF
Back to the Top
Hello,
A short answer on a very specific point, in that long debate, as a
statistician.
On Tue, May 24, 2016 at 01:31:11PM -0600, Steven L Shafer wrote:
« 2. I think your testing of this assumption is described on page 759: “we estimated the
allometric exponents for each of the 4 main pharmacokinetic parameters (Supplementary Table S4).
Initial estimates of 2/3 and 1.25 were used for the clearance and volume exponents.” However, you
tested this with bootstrap, not with log-likelihood profiles. Why?
The bootstrap needs far fewer assumptions than log-likelihood profiling to
build confidence intervals (or, equivalently, tests): profiling
assumes that the chi-square approximation holds well enough to
allow one to define a cut-off level, with the confidence interval built as
the intersection of the log-likelihood profile and this cut-off. This
may be wrong at finite sample sizes and with a « strongly » non-linear
model with a lot of noise, which is probably quite frequent in
population PK models. So using the bootstrap does not seem to be a problem,
compared to log-likelihood profiling.
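For concreteness, the cut-off referred to here is simply a chi-square quantile; a sketch (using scipy) of the usual one-parameter, 95% case:

```python
from scipy import stats

# Profile-likelihood CI for one parameter: the interval is bounded where
# 2 * (max log-likelihood - profiled log-likelihood) crosses the
# chi-square(1 df) quantile, i.e. about 3.84 for 95%.
cutoff = stats.chi2.ppf(0.95, df=1)
assert abs(cutoff - 3.84) < 0.01
```

In NONMEM terms this is the familiar 3.84-point rise in objective function that bounds a one-parameter 95% interval.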
However, there are also a few subtleties in the bootstrap, starting with the
exact kind of bootstrap used and the number of draws made, and
in such complex settings there is probably no convergence proof for the
method.
I hope I did not misinterpret the question,
Emmanuel CURIS
emmanuel.curis.-at-.parisdescartes.fr
Back to the Top
Dear Zvi,
Sorry, I initially thought you were blaming the TGN 1412 disaster on the use of allometric scaling. If
I understand correctly you are actually blaming it on doing cross-species extrapolation while
ignoring cross-species differences. Using a different scaling method (linear, surface area, etc)
would probably not have changed the result.
A disputed point in this discussion is whether intra-species (eg. child-adult) allometric scaling is
a reasonable thing to do. Would you agree that in this case differences in receptor
density/occupancy are likely to be less critical compared to monkey-human extrapolation?
warm regards,
Douglas
Back to the Top
Dear Leonid:
Thank you for this. It is exactly the point that I wish to make.
The ONLY assumption in data analysis that probably doesn't merit testing is the Central Limit
Theorem. That seems to be a fundamental property of sampling. Other than the Central Limit Theorem,
data analysis involves assumptions that merit testing:
1. A t test assumes that the populations follow normal distributions. Often this is not
tested. However, it is perfectly reasonable to ask that authors use a normality test such as the
Shapiro-Wilk or Kolmogorov-Smirnov test, or use a Q-Q plot.
2. Logistic regression assumes a linear relationship between the independent variable and the logit
of the binary dependent variable. Authors are often asked to verify this, typically by binning the
data so that logit can be calculated within each bin using the observed frequency, and plotting the
logit vs. the independent variable.
3. Cox proportional hazards models assume proportional hazards. This is a strong assumption, and if
violated can lead to highly misleading results.
4. In pharmacokinetics, conventional mammillary models assume instantaneous mixing of the
intravenous bolus in the central compartment. In anesthesia, where times are frequently modeled in
seconds, this assumption clearly fails.
5. Compartment models assume a terminal log-linear decay, independent of the method of drug
delivery.
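Item 1, for example, is directly checkable in any statistical environment; a sketch using scipy, on simulated data (purely illustrative, not from any study discussed here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=0.0, scale=1.0, size=200)
skewed_sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Shapiro-Wilk: a small p-value is evidence against normality.
_, p_normal = stats.shapiro(normal_sample)
_, p_skewed = stats.shapiro(skewed_sample)
assert p_skewed < 0.05  # the heavily skewed sample is flagged
```

The same few lines are all it takes to move "assume normality" into "tested normality" before a t test is reported.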
What Dennis and I have requested in our Allometry Shallometry! editorial is no different. The
question is not whether allometry is "true" or not. That is a meaningless question - these are just
models. Allometry is an assumption. As such, it should be tested. If allometry helps the model
describe the data, great! If not, then the additional complexity should be rejected for parsimony.
Additionally, if analysts wish to add a power to body size, then characterize the power that best
describes the data, exactly as you have done below. Let the data speak!
You note that you pick 0.75 if the data are not informative. My preference would be to pick 1 if the
data are not informative. This likely reflects our respective contexts. My context is clinical
dosing. In that context, it easier (and safer!) to translate pharmacokinetics to dosing guidelines
if doses are proportional to weight. Were my context scaling animal PK to human PK, then I would
probably choose 0.75 if the data were not informative.
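"Characterize the power that best describes the data" can be as simple as a log-log regression; a sketch on simulated adult-range data (true exponent 0.75, all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated subjects, 30-100 kg; clearance generated with a true
# exponent of 0.75 and log-normal between-subject noise.
wt = rng.uniform(30.0, 100.0, size=200)
cl = 10.0 * (wt / 70.0) ** 0.75 * np.exp(rng.normal(0.0, 0.2, size=200))

# Empirical exponent: the slope of log(CL) versus log(WT/70).
slope, intercept = np.polyfit(np.log(wt / 70.0), np.log(cl), 1)
assert 0.6 < slope < 0.9  # the regression recovers roughly 0.75
```

In a real population analysis the exponent would of course be estimated inside the nonlinear mixed-effects model, with a confidence interval, as in the CL exponents quoted earlier in this thread.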
Thanks,
Steve
Back to the Top
Dear Doug:
This is not correct: "A disputed point in this discussion is whether intra-species (eg. child-adult)
allometric scaling is a reasonable thing to do."
The issue being discussed is whether it is reasonable to ASSUME allometry WITHOUT EVIDENCE.
Dennis Fisher and I do not question whether allometry is reasonable to try. However, quoting our
conclusions: "Investigators of human pharmacokinetics should follow the advice of our mentor, the
late Lewis Sheiner, to “let the data speak.” Unless it is clearly and convincingly demonstrated to
be better than a simple weight-proportional model, we consider allometry shallometry."
I asked Dennis to write our editorial after becoming frustrated by authors who refused to provide
evidence that the assumption of allometry was justified by the data. What other modeling claim has
such a stranglehold that multiple authors claim it is "true" without evidence?
Verifying assumptions is a fundamental principle of all data analysis. When you employ allometry,
test your assumption that allometry improves the ability of your model to characterize your data.
Since your interest is guiding clinical dosing, exactly the same as my interest, if the data do not
inform the power on clearance, then in the interest of patient safety you should choose 1. Otherwise
you risk patient injury for no appreciable benefit.
I think there are ZERO package inserts that give dosing instructions using weight to the 3/4 power.
Is this correct? If so, then another risk of using allometric scaling without clear justification is
that it will be rejected by the FDA. If there are no examples of package inserts that instruct
clinicians to give drugs using weight to the 3/4 power, then allometric modeling will lead to
academic discussions, such as this, but will be irrelevant in advancing therapeutics.
Thanks,
Steve
PS: If anyone knows of package inserts that use allometric principles to guide dosing, please let me
know!
Back to the Top
Dear Drs. Shafer and Fisher,
Most of the discussion I have seen here focused on the clinical practice. However, I’d like to point
out first that we can also apply allometric scaling in drug development. If one needs to design a
pediatric trial for the first time, what would we do to narrow down the doses, and how can we inform
the study design? So whether to fix the exponent to 0.75 or not depends on the context.
I do agree with you that it is impractical for physicians to calculate the dose by themselves using
allometric scaling for pediatrics. However, they don't have to. Take a look at the BUSULFEX label
information for pediatrics and Booth et al (Journal of Clinical Pharmacology, 2007; 47:101-111).
It's actually the body weight normalized dose that was written on the label while FDA used
allometric scaling in their PopPK modeling. There’s always a way that we can make things simple in
practice, but this doesn’t mean we should give up on ‘complex’ modeling when it is necessary.
Back to the discussion on allometric scaling, Momper et al. (JAMA Pediatr. 2013;167(10):926-932) and
my abstract at the ASCPT 2016 annual meeting (Allometry is a Responsible Choice in Pediatrics Drug
Development: https://ctm.umaryland.edu/posters_presentations.html) provided evidence that
allometric scaling works well in extrapolating PK from adults to pediatrics, though neither of us
discussed the underlying mechanism of allometric scaling in detail. Furthermore, I won’t mind if
we need a maturation function of age in allometric scaling if necessary. Actually, we have a clear
explanation of this maturation function; I believe most of the publications provided an explanation
when allometric scaling was applied with this maturation function.
One important aspect of modeling is to incorporate prior knowledge, e.g. allometric scaling.
However, people tend to ignore others’ work and focus only on their own data. Indeed, this is bad
practice to me. It may not be practical to collect enough data in pediatrics to precisely describe
the PK of each drug before we make a dosing decision for them. What shall we do if we don’t have
enough information? To me, it is risky if we let the data speak by itself. Actually, we don’t have
to, and we should not.
Maybe I misinterpret, but I don’t think data itself can speak. It is always we who analyze or model
the data, and it is we who make the data speak in the way we want, with the assumptions we made.
P-value (or the best fit of the data) is not the purpose of modeling. Actually, it’s less important.
I think modeling is to derive useful information from the data to answer meaningful questions rather
than describe the data. Therefore, I would like to incorporate my prior knowledge with the data and
make the decision, because prior knowledge is also important data to me (probably more valuable),
though we seldom use it in modeling.
Why should we fully rely on data from one single clinical trial, but ignore things that have
happened hundreds of times before?
Tao
--
Tao Liu BSc
PhD Candidate
Center for Translational Medicine
University of Maryland, Baltimore
Back to the Top
Dear Emmanuel:
I appreciate your comments. Having routinely used both log likelihood profiles and bootstraps for
years, I'd like to offer several observations:
1. Log-likelihood profiling looks at the data slightly differently than bootstraps. Log-likelihood
profiling allows you to see exactly how well you know each parameter given the data. If the log
likelihood profile is a narrow smooth parabola, then the parameter estimation is highly informed by
the data. If the log-likelihood profile is flat, then the data do not inform the parameter
estimate.
2. In a bootstrap, the data are constantly resampled (obviously). The bootstrap tells you how stable
the estimate is across resamples. If all of the data inform the parameter estimate, then the
bootstraps will reach similar estimates, with a narrow SE. If a small subset of data inform the
parameter estimate, then the estimate may change depending on whether the subset is
under-represented, adequately represented, or over-represented in the bootstrapped sample. This will
produce a large SE.
3. You can envision this by the following experiment, which I've done. Let your samples be thousands
of data points from a normal distribution with a mean of 0 and a standard deviation of 1. Let your
model be Y = THETA1 + THETA2. For starting estimates, let THETA1 be 10, and THETA2 be -10. A log
likelihood profile will be absolutely flat for THETA1, and for THETA2, because any change in THETA1
is compensated for by a change in THETA2. This says (accurately) that you don't know either one. In
a bootstrap THETA1 and THETA2 will be close to the starting estimates, because there is no gradient
to increase or decrease either estimate with any resampled data set. The log-likelihood profile
tells you that you don't know either parameter, which is true. The bootstrap will suggest that you
know each parameter rather well, which is false.
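The experiment in item 3 can be reproduced in a few lines (a toy sketch using least squares in place of a full NONMEM likelihood):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=1000)  # pure noise: mean 0, SD 1

def sse(theta1, theta2):
    """Sum of squared errors for the model Y = THETA1 + THETA2."""
    return np.sum((y - (theta1 + theta2)) ** 2)

# Profile over THETA1: for any fixed THETA1 the best THETA2 is
# mean(y) - THETA1, so the profiled objective is flat everywhere.
profiled = [sse(t1, np.mean(y) - t1) for t1 in (-100.0, 0.0, 10.0, 100.0)]
assert max(profiled) - min(profiled) < 1e-6
# The flat profile says the data cannot separate THETA1 from THETA2,
# even though their sum is well determined.
```

A bootstrap of this model, by contrast, would keep returning values near whatever starting estimates were used, producing a misleadingly narrow SE for two parameters the data cannot actually distinguish.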
4. I think this is EXACTLY your point below: "This may be wrong in finite sample sizes and with a «
strongly » non-linear model with a lot of noise." In my example, the data are entirely noise. There
is no signal, other than that the mean is 0. However, the above example suggests to me that the
model need not be strongly non-linear.
5. As I interpret the Holford model, 10 different THETAs inform the estimate of clearance. I can't
tell if this approaches the redundancy of "Clearance = THETA1 + THETA2", which would clearly be
wrong. A log-likelihood profile will reveal this. I have less confidence in a bootstrap. That is why
I would prefer to see a log-likelihood profile of the power term, rather than a bootstrap
confirmation of the initial estimate.
Thanks,
Steve
Back to the Top
Dear Steven,
Thanks for your comments. I indeed misunderstood your question: when
you spoke about « tests », I had in mind the statistical part, and not
the model-fit validation part. I definitely agree that the bootstrap, at
least in its usual form, does not help very much to detect invalid
model fits. Note however that in some situations, some bootstrap
variants may detect such problems (I experienced that in situations
very different from PK models, where analysis of « bootstrap »
parameter distributions evidenced several minima and other
convergence issues. Of course, direct computation of bootstrap
confidence intervals masked this. I put « bootstrap » in quotes because
it was not the basic raw bootstrap).
I guess studying the joint parameter distributions obtained by bootstrap
would also give hints about such situations, with very high
correlations or patterns looking very different from an ellipse.
In fact, in the extreme case you mention, I think that the best tool
is to change the initial parameter estimates and see what
happens... That should be done after any non-linear fit anyway
(including logistic regressions or other non-linear models), to detect
false minima or non-identifiability of the model, which are in general
much more difficult to detect than the obvious example you gave, which
should be detected even before starting any fit.
On the second point, I also agree: bootstrapping assumes a
homogeneous population, or should be done with appropriate precautions
in re-sampling the data. However, I'm not sure this is really a
problem and not a feature: if you don't have a way to detect these
influential values and « explain » them, then it is difficult to trust
the fitted value, and large confidence intervals may be more appropriate,
no? And here again, I think analysing the distribution of the
bootstrapped estimate values may be much more informative than their
summary.
Well, I guess my conclusion would be that both tools are needed,
because they address different points, and only in some special cases
does one of them alone give all the information. Objective function
profiles (quite often this is the log-likelihood, but really not
always) serve as one tool among others to check that the model makes
sense, and the bootstrap gives "correct" confidence intervals (and
sometimes other information), which of course are useful only if the
model makes sense... Would you agree with this position?
Best regards,
Emmanuel CURIS
emmanuel.curis.-at-.parisdescartes.fr
Back to the Top
Thank you Tao,
To read this reasonable testament from a current graduate student lifts my spirits! You describe our
daily work in drug development. Without the application of prior knowledge (data, guidances,
experiences, publications, etc.) we could not do our job.
Many thanks to all participants of this discussion,
Joachim
Joachim Grevel, PhD
Scientific Director
BAST Inc Limited
Science & Enterprise Park
Loughborough University
Loughborough, LE11 3AQ
United Kingdom
Back to the Top
Dear Steve,
I agree with you completely that if the data is informative enough you should test every reasonable
thing you can think of: allometric, linear, whatever. These cannot all appear in detail in the final
manuscript, which would be too long and distracting, so it can appear that linear models were not
tested while they actually were.
However I disagree with the advice at the end of the editorial that allometric scaling should only
be used if it shows a clear improvement over linear scaling. I can think of two reasons right away:
1) This unfairly elevates the linear scaling hypothesis without evidence from the data. If you
want to “let the data speak”, it often says that there is no difference between the linear and
allometric hypotheses. So a “data-driven” approach would treat the hypotheses equally, but this is
not what the advice is. You require “clear and convincing” evidence for allometric scaling but
require none to assume the linear scaling hypothesis.
The editorial justifies this elevation based on ease of mental calculation as a safety issue. I
think a power function calculated by a computer has a lower risk of error compared to a linear
function done mentally. So I don’t think the ease of mental calculation is really a consequential
improvement. Computers are everywhere, and in the OR too; why not use them?
2) I think (no evidence yet) an allometric model is usually safer than a linear one. I
constantly hear anesthesiologists talking about higher (per kg) doses for children and lower (per
kg) doses for the obese. A safe model should probably do the same thing, and allometric models
generally do. An anesthesiologist would not suggest giving the same (per kg) dose for all sizes, but
this is exactly what a linear model suggests!
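That clinical intuition falls directly out of the power model: if CL is proportional to WT^0.75, then per-kg clearance (and a clearance-matched per-kg maintenance dose rate) goes as WT^-0.25, higher per kg in small children and lower per kg in the obese. A sketch with illustrative numbers:

```python
def per_kg_cl(wt_kg, cl_ref=10.0, wt_ref=70.0, exponent=0.75):
    """Per-kg clearance implied by a fixed-exponent allometric model."""
    return cl_ref * (wt_kg / wt_ref) ** exponent / wt_kg

# Allometric (exponent 0.75): per-kg clearance falls as size increases.
assert per_kg_cl(15.0) > per_kg_cl(70.0) > per_kg_cl(140.0)

# Linear (exponent 1): the same per-kg value at every size.
assert abs(per_kg_cl(15.0, exponent=1.0) - per_kg_cl(140.0, exponent=1.0)) < 1e-12
```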
The objection I usually hear goes along the lines of “my data only concerns adults and our model
should not be applied outside the tested population”. This is a lawyer’s way to fix a problem; we
can do better. Modelers should realize that in the real world their models get extrapolated, and
should make an effort to ensure that the consequences are benign. Good extrapolation properties for
a model are like anti-lock brakes on a car. Ideally, they are never used, but when they are needed
they are very important to have.
Extrapolation is by definition not “data-driven” but I think it is important to get it right. We
should learn from the problems of the James LBM equation and make sure our models “do the right
thing” for obese individuals, even if they are not in our dataset! The same concept applies to
children at the other side of the size spectrum. What the “right thing” actually is depends on the
particular drug, and allometric scaling is not by definition the best choice, but I think it often
is. Again, this is not evidence from the data. I am much more concerned with whether a model “does
the right thing” when extrapolated than whether it was able to extract those last few points out of
the objective function.
I don’t know about package inserts, but I imagine they probably suggest different dosing rates for
children (higher per kg) and for the obese (lower per kg). So package inserts are actually
approximating allometric scaling.
Warm regards,
Douglas
Back to the Top
Hi All
Steve asks: If anyone knows of package inserts that use allometric principles to guide dosing,
please let me know!
Here is the link to the FDA review on the dosing table for famciclovir that achieves pediatric
penciclovir exposure similar to that in adults:
http://www.fda.gov/downloads/Drugs/DevelopmentApprovalProcess/DevelopmentResources/UCM198018.pdf
The model behind the dosing table is based on allometric principles. These principles served well in
the pediatric development of this drug. Interestingly, in the PK study designed to explore the
dose-exposure relationship, we used linear dose scaling to err on the side of caution (i.e.
slightly under-expose smaller children), and indeed the generated data deviated from the linear
relationship as expected. Rather than report the actual model-based relationship, it was agreed to
report discrete dosing. I suspect this approach may be commonly used in labels.
Kind regards
Mick
PharmPK Discussion List Archive Index page
Copyright 1995-2014 David W. A. Bourne (david@boomer.org)