- On 19 Oct 2005 at 11:27:16, dsharp5.at.rdg.boehringer-ingelheim.com sent the message

The following message was posted to: PharmPK

Group:

This is one of those very simple questions, because it is very basic and therefore gets lost in the sophisticated discussions that we have in this group.

In my CRO and consulting days I worked with a number of companies on PK. I found that a number of them allowed the computation of elimination rate constants (and half-life and AUCinfinity) using only two points in the terminal phase. Of course their r-squared values are quite good(!), but I never believed this was a legitimate approach, although I failed to convince them of this.

If one consults standard texts, they say a minimum of three points is needed, but why? Searching my memory back in the hazy mists of the past, it strikes me that it requires 3 points to uniquely define an exponential function. When we do a log transform the resulting straight line requires only two points, but we shouldn't lose sight of the fact that it's an exponential function we are determining.

My question is: is the use of at least three points a mathematical necessity, or merely good sense? If it is the latter, then good sense obviously differs from place to place. I am not formally trained in PK (or anything else I do, for that matter!) so I missed this early lesson. Please edify me.

I have considered a post entitled "Stupid PK tricks" where I outline some of the dubious approaches I have "experienced", but it would only be for humor, and would not fit with the serious nature of the group.

Dale

[Standard text? Two points is the minimum for half-life with the assumption that you are taking two points from the log-linear terminal phase. Three (or more) points allows you to start testing/verifying that assumption. -db]

- On 19 Oct 2005 at 13:17:19, kevin.m.koch.at.gsk.com sent the message

Dale,

If we linearize the exponential function, it can be defined by two points. It's just good sense to use more. But more important than the number of points is the time frame over which they are spread. Two or three points spanning a couple of half-lives should better estimate the elimination function than a dozen points spanning a fraction of a half-life.

Kevin

- On 19 Oct 2005 at 14:59:27, Xiaodong Shen (shenxiaodong11.-at-.yahoo.com) sent the message

Hi,

To judge whether the data actually fall on a line we need at least three points; in mathematical terms, two points always make a line.

In addition, I am not a PK person.

Xiaodong

- On 19 Oct 2005 at 19:11:38, Indranil Bhattacharya (ibhattacharya.-a-.gmail.com) sent the message

Dale, from my limited experience in the world of PK, I would suggest that three points should be considered for estimation of the elimination half-life. My justification for the selection being: 1) with more than two data points I will have a 'richer' data set to compute the elimination half-life, and the half-life calculated would be a 'better estimate'; 2) at lower concentrations I would expect more variability (due to the assay), assuming that the profile is being followed until it reaches LOQ.

Of course the selection and overall contribution of the third point depends upon its position in the PK profile.

Indranil Bhattacharya
Ph.D. candidate
Dept. of Pharmaceutical Sciences
State University of New York at Buffalo
USA

- On 19 Oct 2005 at 21:52:26, "Kassem Abouchehade" (kassem.at.pharm.mun.ca) sent the message

Dale,

When determining the half-life from the slope (-K/2.303) of the terminal line resulting from a plot of log C vs. time, we rely on at least 3 points, which is more reliable. The time points should be selected such that the interval between the first and the last point chosen is more than twice the estimated half-life based on them. Using two points will be less accurate and not reliable, especially when dealing with drugs with very long half-lives; it also depends on how low the drug can be detected during the elimination phase.

Also, when comparing the terminal phases of two drugs, one with a long and the other with a short half-life, relying on two points only is not accurate and will not provide a fair PK comparison between the two drugs.

Kassem

- On 19 Oct 2005 at 23:44:37, "Kassem Abouchehade" (kassem.at.pharm.mun.ca) sent the message

Dale,

I would also like to add this old paper by Gibaldi and Weintraub for your reference:

Gibaldi M, Weintraub H. "Some considerations as to the determination and significance of biologic half-life". J Pharm Sci. 1971 Apr;60(4):624-6.

Kassem

- On 19 Oct 2005 at 21:30:53, Varma MVS (varma_mvs.at.yahoo.com) sent the message

Hi,

A reliable Kel needs more than 2 points from the terminal profile. Although a straight line can be drawn with 2 points, that makes no sense statistically.

In many practical situations the terminal portion of the plasma concentration profile falls very close to the LOQ, where the analytical variability is maximum. Thus considering only the last and last-but-one points will lead to wrong numbers. Instead, averaging the Kel obtained with successive sets of at least 3 points will give a better picture.

However, if one finds only 2 points in the elimination phase, it is always good to go for model fitting or non-compartmental analysis.

Varma Manthena

- On 20 Oct 2005 at 08:34:38, "Willi Cawello" (Willi.Cawello.at.schwarzpharma.com) sent the message

Dear Dale,

I expect your question refers to the terminal half-life. Under this condition please find this answer:

The working group on pharmacokinetics of the AGAH (Association for Applied Human Pharmacology) has published the results of their discussions about PK items in a text book (Parameters of Compartment-free Pharmacokinetics, Willi Cawello (Ed.), 1999). Please find an extract from section 4.2.1, titled 'Calculation of the terminal half-life from plasma data':

In general, only the terminal half-life is determined by model-independent methods. Conceptually, this is carried out by means of a semilogarithmic presentation of measured drug concentrations versus time. In order to decide whether calculation of a half-life is meaningful, the terminal portion of this presentation has to be examined. If the data in this portion of the profile can be reasonably well approximated by a straight line, a (terminal) half-life t1/2 can be calculated according to

t1/2 = ln(2) / lambda-z [F4.7]

where lambda-z denotes the negative of the slope of the approximating straight line. Calculation of lambda-z is generally carried out by unweighted linear regression [Snedecor and Cochran, 1989], resulting in

lambda-z = [ sum(ti) * sum(ln Ci) - n * sum(ti * ln Ci) ] / [ n * sum(ti^2) - (sum(ti))^2 ] [F4.8]

where n is the number of data points used in the regression analysis, ti the respective times and ln Ci the corresponding logged drug concentrations (to base e). There are no fixed rules for the selection of data to be used in this analysis, but the following hints may give some guidance:

1. As far as possible, all concentration data in the terminal phase should be selected; however, a minimum of three data points should be used.

2. Whenever possible, the last concentration measured at the end of the profile should be used. Taking this concentration into account could be problematic in cases where it is higher than concentration values at earlier time points (including values lower than the limit of quantification (LOQ)).

3. The maximum observed drug concentration, Cmax, should only be used if it is not substantially affected by drug absorption.
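As a quick illustration (my own sketch, not part of the AGAH text; the function name is hypothetical), equations F4.7 and F4.8 can be written in a few lines of Python:

```python
import math

def terminal_half_life(times, concs):
    """Estimate lambda-z and t1/2 by unweighted linear regression of
    ln(C) versus time, following equations F4.7 and F4.8."""
    n = len(times)
    ln_c = [math.log(c) for c in concs]
    sum_t = sum(times)
    sum_lnc = sum(ln_c)
    sum_t_lnc = sum(t * lc for t, lc in zip(times, ln_c))
    sum_t2 = sum(t * t for t in times)
    # F4.8: lambda-z comes out positive for a declining profile
    lambda_z = (sum_t * sum_lnc - n * sum_t_lnc) / (n * sum_t2 - sum_t ** 2)
    t_half = math.log(2) / lambda_z  # F4.7
    return lambda_z, t_half

# five sampling times in the terminal phase, true half-life 12 h
times = [10, 12, 16, 24, 36]
concs = [100 * 2 ** (-t / 12) for t in times]
lambda_z, t_half = terminal_half_life(times, concs)
```

With noise-free monoexponential data the regression recovers the half-life exactly; the interesting cases are, of course, the noisy ones discussed in this thread.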

From a practical viewpoint, the determination of half-lives is best accomplished by means of interactive pharmacokinetic or statistical software which allows adequate graphical presentation of the data as well as the corresponding calculation of pharmacokinetic parameters, such as the terminal half-life t1/2.

*

a.) As a general rule, the observation period should be about three to five times the supposed half-life, and five observations should be scheduled within the range of the terminal phase. For example, if the supposed half-life is 8 hours, blood samples should be collected up to 24-40 hours after drug administration, with samples taken e.g. at 10, 12, 16, 24 and 36 h.

b.) Using more sophisticated methods (so-called peeling methods, or methods of residuals) it is possible to determine not only the terminal half-life but also the half-lives described in the equation C(t) = A1*exp(-lambda1*t) + A2*exp(-lambda2*t) + ... [Gibaldi and Perrier, 1982].

c.) The half-life of a drug can show large interindividual variability.

d.) Each individual drug concentration vs. time profile should be evaluated separately. For reasons of consistency, it is recommended to initially present all the profiles together on a semilogarithmic scale and to consider the following questions:

Is it possible to use all the plasma concentrations following a timepoint common to all the profiles?

Is it possible to use all the plasma concentrations within a given time window (e.g. from 4-12 h after drug intake)?

Is it possible to use the last n drug concentrations for each profile (n >= 3)?

e.) Alongside these graphically-based methods for determining half-lives, other methods based on mathematical algorithms are also available. For example, in WinNonlin the following algorithm is used: linear regressions are repeated using the last three points, the last four points, the last five points, etc. For each regression, an adjusted R2 is computed:

adjusted R2 = 1 - (1 - R2) * (n - 1) / (n - 2)

where n is the number of data points in the regression and R2 is the square of the correlation coefficient. The regression with the largest adjusted R2 is selected to estimate the terminal half-life, with one caveat: if the adjusted R2 does not improve, but is within 0.0001 of the largest value, the regression with the larger number of points is used.
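A sketch of this selection rule in Python (my own hypothetical rendering of the algorithm as described above; the actual WinNonlin implementation may differ in details):

```python
import math

def ln_regression(times, concs):
    """Unweighted linear regression of ln(C) versus time; returns (slope, R^2)."""
    n = len(times)
    y = [math.log(c) for c in concs]
    mt, my = sum(times) / n, sum(y) / n
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (yi - my) for t, yi in zip(times, y))
    syy = sum((yi - my) ** 2 for yi in y)
    return sxy / sxx, sxy * sxy / (sxx * syy)

def best_lambda_z(times, concs, tol=1e-4):
    """Regress on the last 3, 4, 5, ... points; pick the largest adjusted R^2,
    preferring more points when the value is within tol of the maximum."""
    fits = []
    for k in range(3, len(times) + 1):
        slope, r2 = ln_regression(times[-k:], concs[-k:])
        adj = 1 - (1 - r2) * (k - 1) / (k - 2)  # adjusted R^2, 2 parameters
        fits.append((k, slope, adj))
    max_adj = max(adj for _, _, adj in fits)
    k, slope, _ = max((f for f in fits if f[2] >= max_adj - tol),
                      key=lambda f: f[0])
    return -slope, k  # lambda-z is the negative of the slope

# monoexponential data, half-life 6 h: all fits are equally good,
# so the tie-break rule settles on the regression using all points
times = [2, 4, 8, 12, 24]
concs = [90 * 2 ** (-t / 6) for t in times]
lambda_z, n_used = best_lambda_z(times, concs)
```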

Best regards,
Willi

- On 20 Oct 2005 at 17:24:03, Stephen Duffull (steveduffull.-at-.yahoo.com.au) sent the message

Hi all,

I think there are 2 distinct components to this discussion:

1) How many data points do you need to estimate the parameters of a straight line, and

2) How many data points do you need to estimate the log-linear slope in a PK noncompartmental study.

For point 1, you need 3 points. There are really 3 parameters (intercept, slope and residual variance). If you estimate only 2 parameters then you assume, incorrectly, that there is no residual variability.

For point 2, I would think that there must be some guidance on this.

Regards,
Steve

[Point 1. Interesting, so you want to know how good your parameter estimates are as well, or is this just an estimate of fit? Two points seem to be sufficient for our clinical colleagues; maybe they assume residual variance is the same (similar) from case to case and don't need to estimate it every time they draw blood samples. Reminds me of the time a well-respected colleague presented data with a straight line drawn through one point; he had assumed the slope ;-) - db]

- On 20 Oct 2005 at 09:38:00, "J.H.Proost" (J.H.Proost.aaa.rug.nl) sent the message

Dear Dale,

I agree with several comments pointing to the importance of the concentration range, in terms of half-lives, for the precision of the estimated elimination rate constant (k) and half-life. I'm not really happy with the suggestions that three data points can be used for the estimation of k.

It is good practice to calculate the standard error and confidence interval of the estimate of k. This gives a good (although certainly not perfect) idea of the reliability of the calculated value of k. With two points the standard error is infinite. Please note that one should use the t-distribution for the calculation of the confidence intervals, and not the normal distribution. With three data points the t-value for the 95% confidence interval is 12.7 (one degree of freedom), so the confidence interval is very wide. With four data points the t-value is 4.3 (two degrees of freedom), and the confidence interval is much less wide. For more data points the gain in precision is not so spectacular (t = 3.2 for five points), so four data points seems a reasonable minimum value.
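This point can be illustrated with a short Python sketch (the helper name is mine; the two-sided 95% t-values are taken from standard tables, matching the 12.7 and 4.3 quoted above):

```python
import math

# two-sided 95% t-values for df = 1..5 (standard tables)
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571}

def k_with_ci(times, concs):
    """Estimate k from the regression of ln(C) versus time, with its
    standard error and 95% CI based on the t-distribution (df = n - 2)."""
    n = len(times)
    y = [math.log(c) for c in concs]
    mt, my = sum(times) / n, sum(y) / n
    sxx = sum((t - mt) ** 2 for t in times)
    slope = sum((t - mt) * (yi - my) for t, yi in zip(times, y)) / sxx
    ss = sum((yi - my - slope * (t - mt)) ** 2 for t, yi in zip(times, y))
    se = math.sqrt(ss / (n - 2) / sxx)  # undefined for n = 2: zero df
    t_val = T95[n - 2]
    k = -slope
    return k, se, (k - t_val * se, k + t_val * se)

# four points with a little multiplicative noise around k = 0.1 per hour
times = [2, 4, 8, 12]
noise = [1.02, 0.98, 1.01, 0.99]
concs = [100 * math.exp(-0.1 * t) * f for t, f in zip(times, noise)]
k, se, (lo, hi) = k_with_ci(times, concs)
# with only two degrees of freedom (t = 4.3) the interval is still wide
```

Note that for n = 2 the residual sum of squares has zero degrees of freedom, which is exactly the "infinite standard error" situation described above.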

A second comment refers to the purpose of the estimation of k. If it is used for the estimation of the AUC from the last time point to infinity, and the extrapolated area is relatively small compared to the total AUC, the precision of k is not really a major topic, and a two-point estimate may be 'good enough'. In that case it is not the confidence interval of k that matters, but the confidence interval of the estimated total AUC.

Best regards,
Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.at.rug.nl

- On 20 Oct 2005 at 09:50:30, andreanicole.edginton.aaa.bayertechnology.com sent the message

Dear group:

An increase in the resolution of points along the 'terminal phase' will affect the calculation of half-life. The terminal phase can be weakly defined by the last two data points. As points are included between the last two time points (usually relatively far apart), the likelihood of detecting an additional 'terminal phase' increases. As long as the elimination is first order, taking the two-point approach will likely underestimate half-life. Increasing the number of points to three is indeed superior.

Andrea

--
Bayer Technology Services GmbH
Process Technology, Biophysics
Leverkusen, Germany

- On 20 Oct 2005 at 13:36:01, Helmut Schütz (helmut.schuetz.aaa.bebac.at) sent the message

Hi Dale!

> I found a number of them allowed the computation of elimination rate
> constants (and half-life and AUCinfinity) using only two points in the
> terminal phase.
> Of course their r-squared values are quite good(!),...

With only two points it must have been not only /good/, but *exactly* 1...

> Searching my memory back in the hazy mists of the past, it strikes
> me that it requires 3 points to uniquely define an exponential function.
> When we do a log transform the resulting straight line requires only two
> points, but we shouldn't lose sight of the fact that it's an exponential
> function we are determining.

No, since

[1] y = A * exp(B * x)

contains *two* parameters, two points also suffice for the exponential. The only difference is that the transformed equation

[2] ln(y) = ln(A) + B * x

can be solved directly through a set of linear equations, whereas [1] is nonlinear in parameter B and therefore calls for an iterative procedure. You can check this with wonderful M$-Excel:

A=100, B=-ln(2)/12=-0.05776226504666210 (half-life = 12)
x= 0   y=100
x=12   y= 50

Applying a linear regression to x | ln(y) (i.e. [2]) gives

A=100.0000000000000, B=-0.05776226504666220

whereas the built-in "Solver" routine (i.e. [1]) gives

A=100.0008693642800, B=-0.05776275283824150

Turning the screws (e.g., changing the number of iterations, the sensitivity, etc.), different values will be obtained. If you change the sign of parameter B in the models to

y = A * exp(-B * x) and ln(y) = ln(A) - B * x

you will get

A=100.0000000000000, B=0.05776226504666220 (LR)
A=100.0008692952850, B=0.05776275282517360 (Solver)

This simple example shows why [2] rather than [1] is applied in 'non-compartmental' PK.
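The same check works without Excel. A small Python sketch (variable names are mine) solving the log-linear form [2] through the two points exactly:

```python
import math

# Helmut's two points: A = 100, half-life = 12
x1, y1 = 0.0, 100.0
x2, y2 = 12.0, 50.0

# Equation [2], ln(y) = ln(A) + B*x, passed through both points exactly:
B = (math.log(y2) - math.log(y1)) / (x2 - x1)
A = math.exp(math.log(y1) - B * x1)

# both parameters of the exponential are fully determined by two points
half_life = math.log(2) / -B
```

Two points determine A and B uniquely; what they cannot do is tell you whether the log-linear assumption holds, which is the whole argument for a third point.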

As David and Xiaodong already pointed out, we need at least three points to look for linearity (since with two points we have zero degrees of freedom for testing).

There was a rather long thread about R2 in 2002; you may have a look at

http://www.boomer.org/pkin/PK02/PK2002228.html

or, if the link is not working, go to the search page

http://www.boomer.org/cgi-bin/htsearch

with the key-words "Non-compartmental" "Analysis" "Odeh".

best regards,
Helmut

--
Helmut Schütz
BEBAC
Consultancy Services for Bioequivalence and Bioavailability Studies
Neubaugasse 36/11
1070 Vienna/Austria
tel/fax +43 1 2311746
http://BEBAC.at
Bioequivalence/Bioavailability Forum at http://forum.bebac.at
http://www.goldmark.org/netrants/no-word/attach.html

[The archive page URLs change from time to time. Whenever I redo a yearly archive the URLs may change. For the current year this can be quite often. Sometimes I change my archive software and redo all the archives. The last time was when I added some extra munging of the email addresses in the archive (see http://members.aol.com/emailfaq/mungfaq.html). Helmut's search terms work exactly, but a more general approach is to use the title/topic as a search term. With title/topic and year you can look up the entry on the annual index at http://www.boomer.org/pkin/ - db]

- On 20 Oct 2005 at 08:02:56, Xiaodong Shen (shenxiaodong11.-at-.yahoo.com) sent the message

Hi,

Two points always give an r-squared value of 1, and 1 makes a line look very good. But people would never use two points to judge whether the data fall on a line, since with two points you can only draw one line, and a perfectly straight one at that.

Xiaodong

- On 20 Oct 2005 at 13:32:19, dsharp5.aaa.rdg.boehringer-ingelheim.com sent the message

All,

Thank you very much for your comments. My own personal practice is very similar to what Willi outlined; however, I have not always been successful in convincing others that this is the best approach. If, as Johannes has suggested, the SE of the Kel of a 2-point line is infinite, then I would say this is not useable. A two-point terminal phase tells us that the true Kel value is somewhere between plus and minus infinity. I would maintain we knew that without running any experiments. I believe this may be another way of stating my argument, which is that infinitely many exponentials can be drawn between 2 points. Certainly no one would argue against the idea that more points in the terminal phase are better than fewer points, but oftentimes in animal studies blood volume and animal care considerations mandate the collection of fewer samples. My approach for profiles with only two points in the terminal phase is to report AUClast, Cmax and Tmax and not go any further.

Nonetheless, what is the consensus of the group? Is the use of two-point terminal phases mathematically proscribed, or merely good sense? Should we accept the results of this analysis? I can point to a literature paper or two where TK based on 2 points in the terminal phase was reported, so it gets by some referees.

- On 20 Oct 2005 at 20:06:08, Jürgen Bulitta (bulitta.at.ibmp.osn.de) sent the message

Dear All,

In addition to the points already mentioned, it might be worth adding a "bioequivalence point of view", especially for extended-release formulations.

I think the number of points used to derive the terminal half-life really should be chosen based on a specified objective for the drug under discussion. As Dr Proost pointed out, what really matters is the impact of the uncertainty in the estimated terminal half-life on the parameter of interest: among others, AUC0-infinity, AUMC (!), MRT, Vss, Vz, and T1/2 itself.

If one is really interested in the influence of the choice of the number of data points on the bias and precision in terminal half-life and its derived parameters, a simulation approach for different proportional and additive analytical errors with subsequent non-compartmental evaluation might be a reasonable choice. This approach might be considered to determine whether the chance to show bioequivalence is affected by the method of estimating terminal half-life, e.g. for a drug with a long half-life and a difficult analytical assay.

My personal practice: I usually use 3-6 data points (for some drugs 4-6) to estimate terminal half-life based on visual inspection (e.g. in WinNonlin) and R^2-adjusted. If the assay precision is good and if there is a systematic increase (or decrease) the more points are selected, I choose 3-4 points. Only if the third point is Cmax do I go for 2 points or skip estimation of T1/2 for that subject.

Hope this helps.

Best regards,
Juergen

--
Juergen Bulitta
Scientific Employee, IBMP
Paul-Ehrlich-Str. 19
D-90562 Nuernberg
Germany

- On 21 Oct 2005 at 00:11:29, (Kees.Bol.aaa.kinesis-pharma.com) sent the message

Dear all,

After reading a couple of messages I think the approaches are sometimes too scientific, and not practical enough.

In standard pharmaceutical PK reports one does not report the SE and CI of the estimate of k. One estimates k, mostly on the basis of a minimum of 3 data points. Acceptance of the estimates is based on other criteria, e.g. R^2 is at least 0.9 (differs from company to company), and the time span of the data points used in the calculation should be at least 2x the estimate of T1/2 (one of our criteria).

In the end it doesn't really matter whether the estimate of your T1/2 is 12, 10.5 or 13, because for one subject you will overestimate T1/2 and for another you will underestimate it. The focus of many reports will be the mean or median T1/2 and the intersubject variability. If your sample size is large enough, your mean or median estimate will not differ much if you use different criteria (as long as your criteria are predefined and consistently applied).

If the purpose of your trial is to formally compare two treatments statistically, a poor estimate will increase your intersubject variability, and may require a larger sample size. What you could also do is improve the design of your study, e.g. measure longer, or improve the sensitivity of your bioassay.

In toxicokinetic studies you often have the problem that you cannot measure the concentrations long enough because you hit the LOQ much more quickly (metabolism is often much faster in rats, mice, etc.), or that you are not able to take enough blood samples without bleeding the animal too much. As a result you sometimes have studies in which you only have 2 data points in the terminal phase in almost every animal. Then again you have to be practical (because you don't want to, or don't have the resources to, do population PK for every preclinical study). You still calculate T1/2 and report the mean or median, but add a remark that T1/2 and the related parameters could not be estimated accurately. At least you have learned something from your study: you know that the T1/2 was, say, about 10 hours and not 2 hours or 100 hours.

The above methods have been used in many drug filings to regulatory authorities. They may not be that scientifically sound to some of you, but at least it helps you move forward.

Best regards,
Kees

Kees Bol
Kinesis Pharma BV
Consultants in Drug Development
The Netherlands

- On 21 Oct 2005 at 13:58:45, "Hans Proost" (j.h.proost.-a-.rug.nl) sent the message

Dear all,

Kees Bol wrote:

> In standard pharmaceutical PK reports one does not report SE and CI
> on the estimation of k.

OK, but why should one not improve the 'standard' PK report? And it is not really required to report the SE and CI; these values can be used to judge whether or not the estimate of k is sufficiently precise to report. If not, this should be reported. This applies to any value mentioned in a report.

> Acceptance of the estimates is based on other criteria, e.g. R^2 is
> at least 0.9 (differs from company to company),

What is the rationale of this criterion? As I wrote in an earlier message, R^2 (or 'adjusted R^2') is not a suitable criterion for goodness-of-fit, among other reasons because it does not take into account the number of data points used (remember that R^2 is exactly 1 for two points).

Willi Cawello wrote:

> e.) Alongside these graphical-based methods for determining half-
> lives, other methods based on mathematical algorithms are also
> available. For example, in WinNonlin the following algorithm is used:
>
> Linear regressions are repeated using the last three points, the last
> four points, the last five points etc. For each regression, an
> adjusted R2 is computed:
>
> where n is the number of data points in the regression and R2 is the
> square of the correlation coefficient. The regression with the
> largest adjusted R2 is selected to estimate the terminal half-life,
> with one caveat: if the adjusted R2 does not improve, but is within
> 0.0001 of the largest value, the regression with the larger number of
> points is used.

Is there any scientific proof of this approach? Taking into account the aforementioned property of R^2, I doubt whether this is a valid approach. I would suggest a different approach, although I must admit that I have not proven it:

Use the residual variance as the criterion for choosing the number of data points. The residual variance is the sum of the squared deviations (on the logarithmically transformed scale) divided by the degrees of freedom, i.e. n-2.

Please note that this is a suggestion only. I don't say that this approach is scientifically proven, and I don't say it is optimal. But at least it takes into account the number of data points in a plausible manner.
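As an illustration only (my own sketch of this residual-variance suggestion, with hypothetical function names and made-up biexponential data):

```python
import math

def residual_variance(times, concs):
    """Residual variance of the ln(C)-vs-time regression: SS / (n - 2)."""
    n = len(times)
    y = [math.log(c) for c in concs]
    mt, my = sum(times) / n, sum(y) / n
    sxx = sum((t - mt) ** 2 for t in times)
    slope = sum((t - mt) * (yi - my) for t, yi in zip(times, y)) / sxx
    ss = sum((yi - my - slope * (t - mt)) ** 2 for t, yi in zip(times, y))
    return ss / (n - 2)

def n_points_by_residual_variance(times, concs):
    """Choose how many terminal points to use (last 3, 4, ...) by
    minimising the residual variance."""
    return min(range(3, len(times) + 1),
               key=lambda k: residual_variance(times[-k:], concs[-k:]))

# biexponential profile: early points lie above the terminal line,
# so including them inflates the residual variance
times = [1, 2, 4, 8, 12, 24]
concs = [100 * math.exp(-0.1 * t) + 80 * math.exp(-1.0 * t) for t in times]
n_used = n_points_by_residual_variance(times, concs)
```

Here the criterion correctly excludes the points still influenced by the fast disposition phase.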

Any comments are welcome!

Best regards,
Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.-at-.rug.nl

- On 22 Oct 2005 at 14:56:41, "Sima Sadray" (sadrai.-at-.sina.tums.ac.ir) sent the message

Dear All,

Here you can follow the discussion with real data. For diclofenac we saw a multiple-peak phenomenon, so for some subjects we had only two points for the estimation of k. We therefore compared the two methods (slope with two or with three points); as you can see, the results may be very different. There was underestimation of k and overestimation of t1/2 in this case.

                          (individual slopes)                 mean    t1/2
slope with two points    -0.14  -0.44  -0.39  -0.25  -0.25   -0.29   -2.35
slope with three points  -0.27  -0.55  -0.43  -0.39  -0.41   -0.41   -1.69
difference               -0.13  -0.11  -0.04  -0.13  -0.16   -0.12    0.66
% difference two/three    47.73  20.80  10.04  34.58  38.52   28.21  -39.30

With Best Regards,
Sadray

- On 24 Oct 2005 at 11:41:11, "Hans Proost" (j.h.proost.-at-.rug.nl) sent the message

Dear all,

In addition to my previous comment on the messages of Kees Bol and Willi Cawello on calculation of half-life:

I made some Monte Carlo simulations for the estimation of the elimination rate constant. The model was a one-compartment model with first-order absorption, with parameters k (since k is the parameter to be estimated I used k as a model parameter instead of CL), V and ka, with lognormally distributed interindividual variability in k, V and ka, and in total 10 data points with measurement error.

The elimination rate constant k was estimated by regression analysis of ln(C) versus time, using 3 to 10 data points. The 'best' estimate of k was chosen by the following criteria:

- maximum value of R^2
- maximum value of 'adjusted R^2' (see below)
- minimum value of the residual variance (the sum of the squared deviations, on the logarithmically transformed scale, divided by the degrees of freedom, i.e. n-2)
- minimum value of the standard error of k
- minimum value of the coefficient of variation of k (standard error of k divided by k)

The performance was expressed as %ME (mean error) and %RMSE (root mean squared error), where 'error' is the relative difference between the estimated and true value of k, i.e. (k_est - k_true) / k_true.

The results can be summarized as follows:

1) The performance is dependent on the various variables, in particular the mean value of k and the time schedule.

2) The difference in performance between the methods is rather small, and generally insignificant.

3) All methods may give some bias (underestimation of k) in case of a small difference between k and ka (as expected).

4) Which method performs 'best' is dependent on the aforementioned variables.

5) The performance of the methods based on the standard error of k is less predictable (sometimes better, sometimes worse), and these methods used (on average) more data points than the other methods.

6) The performance of the R^2 method and the adjusted R^2 method is almost the same (on average, R^2 uses fewer data points).

7) The method based on R^2 uses a smaller number of data points than the method based on residual variance (and a marginally smaller number than 'adjusted R^2'); on average the difference between 'residual error' and 'R^2' is about 1 data point.

The latter finding confirms my expectation that the R^2 method uses fewer data points, but not my second expectation that this method would be less precise than the 'residual variance' method. Both methods are about equally precise. From these findings I conclude that the R^2 (or adjusted R^2) method should be preferred, since it uses fewer data points, and thus is likely to be less influenced by e.g. second-peak phenomena.

A final comment with respect to 'adjusted R^2': I found different forms for adjusted R^2 on the Internet, but they gave the same result for a particular data set (implying a rearrangement of terms). I used the following equation:

Adjusted R^2 = 1 - (n-1)/(n-k) * (1 - R^2)

where n is the number of measurements and k is the number of independent parameters (in this case 2, i.e. slope and intercept). This is equivalent to:

Adjusted R^2 = R^2 - (k-1)/(n-k) * (1 - R^2)

I also found a different equation:

Adjusted R^2 = R^2 - p/(n-p-1) * (1 - R^2)

where p is the number of regressors (predictors in the regression analysis). Since k = p+1 (adding the intercept parameter), this is also the same equation.
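The algebraic equivalence of the three forms is easy to confirm numerically; a small Python check (the function name is mine):

```python
def adjusted_r2_forms(r2, n, k):
    """Three published forms of adjusted R^2;
    k = number of parameters, p = k - 1 regressors."""
    p = k - 1
    f1 = 1 - (n - 1) / (n - k) * (1 - r2)
    f2 = r2 - (k - 1) / (n - k) * (1 - r2)
    f3 = r2 - p / (n - p - 1) * (1 - r2)
    return f1, f2, f3

# slope + intercept (k = 2) fitted to 5 points with R^2 = 0.95
f1, f2, f3 = adjusted_r2_forms(0.95, n=5, k=2)
```

All three evaluate to 1 - (4/3)*0.05 for this example, confirming the rearrangement of terms.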

In conclusion: my earlier scepticism with respect to R^2 as a criterion for choosing the number of data points for the estimation of the elimination rate constant and half-life was not justified. R^2 is the best criterion.

Best regards,
Hans Proost

Johannes H. Proost
Dept. of Pharmacokinetics and Drug Delivery
University Centre for Pharmacy
Antonius Deusinglaan 1
9713 AV Groningen, The Netherlands
tel. 31-50 363 3292
fax 31-50 363 3247
Email: j.h.proost.aaa.rug.nl


Copyright 1995-2010 David W. A. Bourne (david@boomer.org)