- On 15 Sep 2003 at 09:49:32, "Ravi Kuppuraj" (rkuppuraj.aaa.innaphase.com) sent the message


The following message was posted to: PharmPK

Hello,

Can somebody comment on what should happen to SSEs when you "fix" a parameter in a PK model, and why?

Thanks

Ravi Kuppuraj

- On 15 Sep 2003 at 15:37:21, "Bonate, Peter" (pbonate.aaa.ilexonc.com) sent the message


Ravi Kuppuraj writes:

> can somebody comment on what should happen to SSEs when you "fix" a parameter in a PK model, and why?

That depends on what you fix it to. If you fix a parameter to its maximum likelihood estimate, SSE should be unaffected. If you fix it to anything other than its maximum likelihood estimate, then SSE should increase. Now MSE, on the other hand, will change. The denominator of MSE is (n - p), where p is the number of estimable parameters. Because you have fixed a parameter, p is smaller and the denominator (n - p) is larger, so for an unchanged SSE the MSE will decrease. Hope this helps,
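A minimal numerical sketch of this point (not from the original thread; the data and straight-line model are hypothetical): an ordinary least-squares fit with both parameters estimated, then with the slope fixed at its least-squares (maximum-likelihood) estimate so that only the intercept is re-estimated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 12)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)
n = x.size

# Full fit: slope and intercept both estimated (p = 2).
slope, intercept = np.polyfit(x, y, 1)
sse_full = np.sum((y - (slope * x + intercept)) ** 2)
mse_full = sse_full / (n - 2)

# Fix the slope at its least-squares (ML) estimate and re-estimate
# only the intercept (p = 1); the optimal intercept is mean(y - slope*x).
intercept_fixed = np.mean(y - slope * x)
sse_fixed = np.sum((y - (slope * x + intercept_fixed)) ** 2)
mse_fixed = sse_fixed / (n - 1)

print(sse_full, sse_fixed)  # essentially equal: SSE unaffected
print(mse_full, mse_fixed)  # MSE changes because the denominator changed
```

Because the slope is fixed at exactly its least-squares value, SSE is unchanged while the MSE denominator grows from n - 2 to n - 1, so MSE goes down; fixing the slope at any other value would instead inflate SSE.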

Pete Bonate

- On 16 Sep 2003 at 12:46:13, "J.H.Proost" (j.H.Proost.at.farm.rug.nl) sent the message


Dear Ravi Kuppuraj,

With respect to your question:

> can somebody comment on what should happen to SSEs when you "fix" a parameter in a PK model, and why?

SSE (sum of squared errors) will always increase if a parameter is fixed during fitting of a model. This is simply because fixing a parameter does not allow the fitting procedure to explore the entire parameter space to find the minimum SSE. So there will always be a different value for the fixed parameter resulting in a lower SSE (unless you fixed the value at the minimum, but this makes no sense; see below).

Please note that the SSE is only one possible 'objective function' to be minimized during fitting; e.g. the objective function may be 'minus two log-likelihood' or any expression related to it (including SSE). The numerical value of the objective function is usually not relevant; it is used only to find the minimum (and to calculate AIC, see below).

In the case of ordinary least-squares (OLS) or weighted least-squares (WLS), 'minus two log-likelihood' can be replaced by n log(SSE) and n log(WSSE), respectively (log refers to the natural logarithm with base e), leaving out various constant terms.
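This equivalence can be checked numerically: for a Gaussian error model with the variance at its ML estimate sigma^2 = SSE/n, minus two log-likelihood equals n log(SSE) plus terms that depend only on n (and so are constant across models fitted to the same data). A sketch; the residuals below are arbitrary illustrative numbers, not from any real fit.

```python
import numpy as np

# Arbitrary residuals from some fit (hypothetical values).
resid = np.array([0.3, -0.1, 0.2, -0.4, 0.1, 0.05, -0.15])
n = resid.size
sse = np.sum(resid ** 2)

# Gaussian -2 log-likelihood evaluated at the ML variance sigma^2 = SSE / n.
sigma2 = sse / n
neg2ll = n * np.log(2 * np.pi * sigma2) + sse / sigma2

# n * log(SSE) plus terms that depend only on n.
alt = n * np.log(sse) + n * (1 + np.log(2 * np.pi) - np.log(n))

print(neg2ll, alt)  # identical up to rounding
```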

Whether or not the change of the objective function caused by fixing or varying a particular parameter is 'relevant' is usually judged from Akaike's Information Criterion (AIC):

AIC = -2 log(likelihood) + 2 P

where P is the number of estimated parameters. Comparing two models, the one with the lowest AIC is the 'best' model.

So, fixing a parameter decreases P and increases -2 log(likelihood).
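As an illustration of this trade-off (a sketch, not from the thread: the simulated data, the fixed value 0.25, and the log-linear one-compartment model are all hypothetical), AIC can be compared between a model estimating both the elimination rate constant and the intercept, and a restricted model that fixes the rate constant, using AIC = n log(SSE) + 2P up to a constant shared by both models.

```python
import numpy as np

# Simulated log-concentrations: true k = 0.2, ln C0 = 3, small noise.
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])
rng = np.random.default_rng(1)
lnC = 3.0 - 0.2 * t + rng.normal(0, 0.05, t.size)
n = t.size

def aic(sse, p, n):
    # AIC up to an additive constant common to both models
    return n * np.log(sse) + 2 * p

# Model 1: estimate both k and ln C0 (P = 2).
slope, lnC0_hat = np.polyfit(t, lnC, 1)
k_hat = -slope
sse1 = np.sum((lnC - (lnC0_hat - k_hat * t)) ** 2)

# Model 2: fix k at a hypothetical independent value, estimate ln C0 only (P = 1).
k_fixed = 0.25
lnC0_fixed = np.mean(lnC + k_fixed * t)
sse2 = np.sum((lnC - (lnC0_fixed - k_fixed * t)) ** 2)

print(aic(sse1, 2, n), aic(sse2, 1, n))
```

SSE of the restricted model can never be lower than that of the full fit; whether its smaller P compensates enough to win on AIC depends on how close the fixed value is to the data-supported one.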

Besides, fixing a parameter implies that the number of assumptions about the model increases; one investigates only a restricted model. This can be done only if there is some plausible value for that parameter, e.g. a value obtained from an independent source. Fixing a parameter simply because the fitting procedure does not provide a plausible estimate (e.g. a negative or zero value) is definitely not good practice (although reporting a negative PK parameter is also not recommended). Here we are on the darker side of modeling ...

Best regards,

Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.aaa.farm.rug.nl


Copyright 1995-2010 David W. A. Bourne (david@boomer.org)