- On 25 Oct 2013 at 13:14:57, zhoux383.at.UMN.EDU sent the message

Hi All,

I'm wondering if there is a way to fix the residual error in Phoenix NLME?

Background: I'm trying to analzye animal PK data collected using

destructive sampling method(1 sample per subject). And fixing the residual

error would enable me to explore the BSV of PK parameters.

Thanks! - On 25 Oct 2013 at 14:02:35, Bob Leary (Bob.Leary.-at-.certara.com) sent the message

Yes you can!

In the Structure tab, next to the Residual section (where you select additive, multiplicative, etc.), you

should see a check box for 'freeze' - this will freeze the residual error.

If you are using a textual model, you can also freeze the error by adding 'freeze', like so:

error(CEps(freeze) =)

Regards,

Devin Pastoor

Research Scientist

Center for Translational Medicine

University of Maryland, Baltimore

www.ctm.umaryland.edu

--

Hi,

In 'edit as textual' mode, you can change the code to:

error(CEps (freeze) = 0.12)

Hope it helps.

Thanks,

Shailly Mehrotra

www.ctm.umaryland.edu

--

If you mean FIX in the NONMEM-like sense of setting it to a given user-specified value as opposed to

determining it through a likelihood optimization -

PHOENIX uses the terminology "freeze" for that and there is an option in the UI where you input the

initial eps value to do just that.

If you want to do it manually in the PML language, then, for example, freezing (or fixing) EPS1 to 1

looks like:

error(EPS1(freeze) = 1)

Bob Leary - On 25 Oct 2013 at 19:06:45, zhoux383.-a-.umn.edu sent the message

Hi David,

Thanks!

Continuing on the question of dealing with destructive sampling data (one

sample per animal) with the PopPK method, I also found a very interesting paper

on using resampling methods to estimate PK parameters by generating

"pseudoprofiles" of PK data (composed of, e.g., 1000 full PK profiles

over 7 time points):

Mager H, Göller G. Resampling methods in sparse sampling situations in

preclinical pharmacokinetic studies. J Pharm Sci. 1998 Mar;87(3):372-8.

The authors used this method to estimate non-compartmental PK parameters

such as AUC, CL, and t1/2. I'm wondering if this method can be

extended to the estimation of compartmental PK parameters (K12, K21, alpha, beta),

or even to more complicated PK/PD model parameters, given PD data resampled

at the same time.

Thanks!

Regards,

Jie - On 25 Oct 2013 at 21:25:22, Pastoor, Devin (dpastoor.at.rx.umaryland.edu) sent the message

Dear Jie,

Maybe I am misunderstanding your question, but Pop PK methodologies (i.e. NLME

modeling) were specifically designed for the issue of sparse sampling! As such, assuming you have an

appropriate experimental design (i.e. you can't determine whether it is a one- vs two-compartment model if

you only have sampling at ~ Cmax and the terminal phase), compartmental parameters can be obtained

regardless of whether samples were obtained through destructive sampling of many individuals or

sparse sampling of some or rich sampling of few...etc. For example, getting 6 concentrations at 8 time

points should give similar results if you get 1 destructive sample x 48 individuals vs 2 samples x

24 individuals vs 4 samples x 12 individuals....

To put it more concisely - it "shouldn't" matter whether you've obtained your concentration-time

profile via destructive sampling or by sparse/rich sampling of individuals. It should handle all

scenarios. How precisely you can estimate the parameters is another matter :-)

While we're on it - if you are going for a compartmental approach I'd recommend using physiological

parameters (CL/V) rather than micro/macro constants due to ease of interpretation!

Best Regards,

Devin Pastoor

Research Scientist

Center for Translational Medicine

University of Maryland, Baltimore

www.ctm.umaryland.edu - On 25 Oct 2013 at 23:34:38, Nick Holford (n.holford.aaa.auckland.ac.nz) sent the message

Hi,

I think this part of Devin's response needs some further clarification

about what is an appropriate experimental design:

"assuming you have an appropriate experimental design (ie you can't determine if it is a one vs two

compartment model if you only have sampling at ~ Cmax and terminal phase) compartmental parameters

can be obtained regardless of whether samples were obtained through destructive sampling of many

individuals or sparse sampling of some or rich sampling of few...etc."

NLME methods typically have two levels of random effects -- random variability of parameters across

subjects and random residual variability around observations. The original question for this thread

was about a special design -- one observation per subject -- which cannot distinguish the random

variability across subjects from the residual variability around observations. In this case, one

approach is to assume a model and parameter(s) for the residual error variability, fix (aka

'freeze') the residual error parameter(s) and then estimate the random variability across subjects

under some structural model (e.g. a compartmental PK model). Thus the destructive sampling design is

not really interchangeable with designs with more than one observation per subject, as Devin seems to

imply:

"For example getting 6 concentrations at 8 time points should give similar results if you get 1

destructive sample x 48 individuals vs 2 samples x 24 individuals vs 4 samples x 12 individuals...."

You will not get similar results in the destructive sample case without

an appropriate assumption about the residual error. That assumption

rests on what other information you have from other experiments.

I would also like to comment that NLME methods were not developed for

the sparse sampling case. They were applied to sparse sampling in

clinical settings and following that many people assumed that any kind

of sparse data design would be useful. There are however no free

lunches. Sparse data designs produce sparse results. If you want to

really learn something from using modelling you should use optimal

design methods so that you can appreciate when sparse designs are too

sparse to be useful. A rule of thumb that I try to follow is to have at

least as many observations per subject as you have parameters in the

model that you are interested in, taken at times that are informative

about those parameters.

Nick Holford

--

Nick Holford, Professor Clinical Pharmacology

Dept Pharmacology & Clinical Pharmacology, Bldg 503 Room 302A

University of Auckland,85 Park Rd,Private Bag 92019,Auckland,New Zealand

email: n.holford.at.auckland.ac.nz

http://holford.fmhs.auckland.ac.nz/ - On 25 Oct 2013 at 23:36:06, zhoux383.-at-.umn.edu sent the message

Hi Devin,

Thanks for your kind reply!

Sorry, I may have confused you a little bit regarding the question. From

the previous discussion and references, I understand that popPK is a useful

method to deal with destructive sampling data (one sample per subject) by fixing the

RUV.

In addition, an interesting re-sampling method has also been proposed to

estimate the PK parameters via bootstrapping for destructive sampling data:

H. Mager & G. Goeller, J. Pharm.Sci. 87, 372-378 (1998)

The method (pseudoprofile-based bootstrap) is illustrated as:

1) resample, with replacement, one concentration at each time point

2) construct a pseudoprofile matrix [ncol = number of time points,

nrow = number of bootstrap replicates], e.g. matrix [ncol = 7, nrow = 1000]

3) calculate the target parameter from each row (a full time profile), e.g.

1000 values of T1/2

4) from step 3, draw 2000 samples of size equal to the number of animals per

time point and take a location parameter (mean or median), giving 2000 values

of T1/2 (each the mean of 3 animals)

5) estimate T1/2 and its distribution

In summary, pseudo full-profile PK data are generated via re-sampling,

"individual" parameters are then calculated for each round of sampling, and

parameter distributions are estimated via a second round of bootstrapping.
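As a rough illustration, the scheme can be sketched in Python. All numbers below are hypothetical (3 animals per time point, 7 time points), the target parameter is the terminal half-life from log-linear regression, and the second stage simply bootstraps the full set of pseudoprofile estimates rather than reproducing the paper's exact resampling sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical destructive-sampling data: 3 animals at each of 7 time points (h)
times = np.array([0.5, 1, 2, 4, 8, 12, 24])
conc = np.array([
    [9.1, 8.7, 9.5],
    [8.0, 7.6, 8.3],
    [6.2, 6.6, 5.9],
    [4.1, 3.8, 4.4],
    [1.9, 2.1, 1.7],
    [0.95, 1.1, 0.85],
    [0.12, 0.15, 0.10],
])  # rows = time points, columns = animals

def thalf(profile, t, n_terminal=3):
    # Terminal half-life via log-linear regression on the last n points
    slope = np.polyfit(t[-n_terminal:], np.log(profile[-n_terminal:]), 1)[0]
    return np.log(2) / -slope

# Steps 1-3: resample one animal per time point to build pseudoprofiles,
# then compute the target parameter for each pseudoprofile
n_boot = 1000
picks = rng.integers(0, conc.shape[1], size=(n_boot, len(times)))
pseudo = conc[np.arange(len(times)), picks]          # shape (1000, 7)
t12 = np.array([thalf(row, times) for row in pseudo])

# Steps 4-5: second-stage bootstrap of a location parameter (here the mean)
boot_means = np.array([rng.choice(t12, size=t12.size, replace=True).mean()
                       for _ in range(2000)])
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])
print(f"t1/2 ~ {boot_means.mean():.2f} h (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```

The same skeleton works for any per-profile estimator: replacing thalf with a compartmental fit of each pseudoprofile is exactly the extension Jie asks about below.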

My question is whether the parameters estimated above are limited

to NCA parameters such as AUC and CL. What about parameters from 2-CMT models or

PK/PD models with more complicated functions?

I imagine people prefer the PopPK method or the simple naive pooled data

method, but I'm just wondering whether re-sampling is a third reasonable way to

analyze sparse data.

Thanks!

Regards,

Jie - On 26 Oct 2013 at 15:14:33, Pastoor, Devin (dpastoor.-at-.rx.umaryland.edu) sent the message

Nick,

Thank you very much for the correction; I jumped a little ahead of myself. I think I should

rephrase...

Jie's initial question was regarding the ability to obtain compartmental parameters from data

gathered via destructive sampling. In that regard, I presume this should be possible using a naïve

pooled technique to get an estimate of 'population' level parameters regardless of destructive vs

sparse sampling.

As you pointed out, without some prior information about one level of the random effects hierarchy,

it would most likely not be feasible to estimate both population and individual level parameters as

provided by NLME.

One question I do have, you say:

> A rule of thumb that I try to follow is to have at least as many observations per subject as you

> have parameters in the model that you are interested in taken at times that are informative about

> those parameters.

>

I won't argue with more observations per subject being preferential, but given the same number of

total observations, do you really feel your results will be so skewed as you decrease the # of obs

per individual while increasing the # of individuals?

For example, if you had an experimental design where due to the invasiveness of the procedure or

contamination you could only take 2-3 samples per individual - would the end results of your model

fit be so different than if you had half the number of individuals but 4-6 samples per individual?

I can see where this could potentially present identifiability issues with high variability - be it

residual or BSV. I am just wondering if there is a 'threshold' at which the sparser designs become

significantly more biased.

I guess nothing a few simulations couldn't help elucidate :-)
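Such a simulation can be sketched quickly. The example below uses hypothetical one-compartment IV bolus parameters and naive pooled fits with SciPy's curve_fit (not a full NLME fit, so BSV is not estimated) to compare 1 observation x 48 subjects against 4 observations x 12 subjects at the same total number of observations:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
DOSE, CL_POP, V_POP, BSV_CL, RUV = 100.0, 5.0, 20.0, 0.2, 0.1  # hypothetical values

def model(t, cl, v):
    # One-compartment IV bolus: C = (dose/V) * exp(-(CL/V) * t)
    return DOSE / v * np.exp(-cl / v * t)

def simulate(sample_times, n_subjects):
    # Simulate subjects with lognormal BSV on CL and proportional RUV;
    # sample_times() returns one subject's sampling times
    t_all, c_all = [], []
    for _ in range(n_subjects):
        cl_i = CL_POP * np.exp(rng.normal(0, BSV_CL))
        t = np.asarray(sample_times(), dtype=float)
        c = model(t, cl_i, V_POP) * (1 + rng.normal(0, RUV, size=t.size))
        t_all.append(t)
        c_all.append(c)
    return np.concatenate(t_all), np.concatenate(c_all)

def naive_pooled_fit(t, c):
    # Pool all observations and fit a single set of parameters (no BSV level)
    popt, _ = curve_fit(model, t, c, p0=[3.0, 15.0])
    return popt  # [CL, V]

# Same total observations (48) under two designs, repeated to see the spread
n_rep = 200
destructive = np.array([naive_pooled_fit(*simulate(lambda: rng.uniform(0.25, 12, 1), 48))
                        for _ in range(n_rep)])
rich = np.array([naive_pooled_fit(*simulate(lambda: [0.5, 2, 6, 12], 12))
                 for _ in range(n_rep)])
for name, est in (("1 obs x 48 subjects", destructive), ("4 obs x 12 subjects", rich)):
    print(f"{name}: CL {est[:, 0].mean():.2f} (sd {est[:, 0].std():.2f}), "
          f"V {est[:, 1].mean():.2f} (sd {est[:, 1].std():.2f})")
```

Comparing the standard deviations across replicates shows how the design affects precision of the pooled structural parameters; checking bias and BSV recovery would require refitting with an NLME tool.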

Devin Pastoor

Research Scientist

Center for Translational Medicine

University of Maryland, Baltimore

www.ctm.umaryland.edu - On 26 Oct 2013 at 22:56:20, Nick Holford (n.holford.-a-.auckland.ac.nz) sent the message

Devin,

You wrote:

> To that regard, I presume this should be possible using a naïve pooled technique to get an

> estimate of 'population' level parameters regardless of destructive vs sparse sampling.

It is not really relevant that sampling is destructive. What is relevant

is the number of observations per subject. It is quite possible to have

only one sample per subject without destroying the subject. The naive

pooled method can be applied with any number of samples per subject but

by definition the method does not distinguish between-subject

variability (BSV) in the parameters. If BSV is small relative to

residual unexplained variability (RUV) then good estimates of the

population parameters may be obtained with a single observation per

subject. This is quite often seen when studying effectively cloned

non-human species such as laboratory mice. This method is the natural

alternative to fixing the RUV to some value and using single

observations per subject to estimate BSV.
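The confounding, and the role of fixing the RUV, can be made concrete numerically. In this sketch (all parameter values hypothetical, V and the sampling time assumed known), each subject contributes one observation at a single time point, so the observed log-scale variance is one equation in two unknowns; fixing the RUV lets the BSV term be recovered by subtraction:

```python
import numpy as np

rng = np.random.default_rng(2)

# One observation per subject at a single time point; hypothetical values
n, t = 500, 4.0
DOSE, V = 100.0, 20.0
CL_POP, OMEGA_CL, SIGMA = 5.0, 0.25, 0.15  # BSV sd (log CL), lognormal RUV sd

cl = CL_POP * np.exp(rng.normal(0, OMEGA_CL, n))                      # BSV on CL
c = DOSE / V * np.exp(-cl / V * t) * np.exp(rng.normal(0, SIGMA, n))  # + RUV

# On the log scale: log C = log(D/V) - (CL_i/V)*t + eps, so
# Var(log C) = (t/V)^2 * Var(CL_i) + sigma^2 -- one equation, two unknowns.
total_var = np.var(np.log(c), ddof=1)

# Freezing the RUV (sigma known from a prior rich study) backs out the BSV term
bsv_term = total_var - SIGMA**2
var_cl = bsv_term * (V / t) ** 2   # implied Var(CL_i)
print(f"total log-variance {total_var:.3f}, implied Var(CL) {var_cl:.3f} "
      f"(true {np.var(cl, ddof=1):.3f})")
```

Without the fixed sigma, any split of total_var between the BSV and RUV terms is equally consistent with the data, which is exactly why the one-observation-per-subject design cannot identify both.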

You also asked:

> I won't argue with more observations per subject being preferential, but given the same number of

> total observations, do you really feel your results will be so skewed as you decrease the # of obs

> per individual while increasing the # of individuals.

It depends what result you are interested in. A study with 10

observations and 3 structural parameters in 6 subjects is richly sampled

for estimation of the structural parameters but is sparse for estimation

of the parameter BSV (Sheiner used to say that at least 25 subjects are

required for a reasonable estimate of BSV). On the other hand 2

observations per subject with 3 structural parameters and 30 subjects is

sparse for the structural parameters but maybe reasonable for BSV. Of

course the adequacy of estimation of structural and BSV parameters are

linked. As noted previously the use of an optimal design program can be

helpful in evaluating designs with a trade off between number of samples

and timing of samples versus the number of subjects. If the model is

correct then there is no reason to suppose the parameters would be

biased ('skewed').

Furthermore you asked:

> I can see where this could potentially present identifiability issues with high variability - be

> it residual or BSV, I am just wondering if there is a 'threshold' in which the sparser designs

> become significantly more biased.

Identifiability issues will arise when you have 1: only one observation

per subject -- then it is impossible to identify RUV; or 2: only one subject --

then it is impossible to identify BSV. Some really bad designs (e.g. only

trough concentrations) will mean some structural parameters will not be

identifiable (e.g. oral absorption).

Otherwise it is really a question of estimability -- the precision and

adequacy of the estimate will depend upon the number of subjects and

number of observations and their timing. There is no threshold. The more data

you have and the better the design, the better the parameter

estimates will be, in the sense of being more precise. As noted above, if

the model is correct then there is no reason to suppose there would be bias.

I hope that helps you sort out some of the issues.

Best wishes,

Nick

--

Nick Holford, Professor Clinical Pharmacology

Dept Pharmacology & Clinical Pharmacology, Bldg 503 Room 302A

University of Auckland,85 Park Rd,Private Bag 92019,Auckland,New Zealand

email: n.holford.at.auckland.ac.nz

http://holford.fmhs.auckland.ac.nz/


Copyright 1995-2014 David W. A. Bourne (david@boomer.org)